UHD 4K Home Theater Upgrade Hell

Many folks embark on a home theater upgrade only to find it a tad more difficult than they expected. My goal was to get everything in my Family Room to UHD 4K. While it looks terrible, I was able to achieve that by putting an external rack next to my built-in TV / speaker cabinet. But wait, my house came with built-in component racks for this stuff! This is where my hell begins. (My current “hacked” setup next to the TV is shown below.)

Hacked External Media Rack

Great House, Great Cabinets, Aging Tech

Almost six years ago, I bought a wonderful house, originally built by the Minnesota Vikings great, Joe Sensor, and later upgraded amazingly by the daughter of the founder of Best Buy. It was clearly Geek Squad city in here for weeks. I have home theater set-ups in both the Family Room and the Media Room in the walk-out basement. Both use embedded racks in cabinets built into the walls, with AMX control panels to control everything from window shades and lights on down to the TV and the components. The main issue I have is that the embedded rack cabinets are on the opposite side of the room from the giant built-in TV / speaker cabinet. The cabinets and the embedded racks are really well designed. They can be pulled out on attachable rails (see below) and have articulating wire guides. There are two racks – one for the main components such as the receiver, amplifier, etc., and a second one for now-unnecessary accessories such as DVD players, CD players, Blu-ray, VHS, etc. Pretty cool racks, huh?

This does not sound terrible at first until you realize it was built around 2005, before HDMI cables, HD, or UHD. It was even before common use of Cat-5E or Cat-6 Ethernet cables. The TV and components in the house when I bought it were pre-HD. For goodness sake, we cannot have that! I would lose my Platinum Couch Potato card.

So What is the Problem?

So the main issue is getting the UHD 4K receiver connected via HDMI 2.0 to the UHD 4K TV across the room (thus my hacked setup with the external rack next to the TV cabinet). Unfortunately, the Geek Squad (or home remodelers) left no auxiliary conduit between the racks and the TV cabinet. The conduit that does run between the rack and the TV is absolutely stuffed with speaker wires and various coax cables. It is impossible to re-fish anything through that mess.

Potential Solutions to Home AV Hell

Fortunately, there is one Cat-5 (not Cat-5E) wire that seems to run directly between the racks and the TV cabinet. I have not buzzed it out to test whether it is a “direct” point-to-point wire or whether it goes through one of the basement Ethernet “home run” switches. It turns out there are a few HDMI UHD 4K extenders that run over Cat-5E or Cat-6. While the cable I have is only Cat-5, it might work if the distance is short enough. So an HDMI extender over Ethernet is my best option.

Of course, there is also the option to rip up the walls, ceiling, and floor and run a 50-foot HDMI cable rated for UHD 4K. While I was in there, I would add a bunch of Cat-6 cables for future expansion, since that seems to be the cable of choice for these HDMI-over-Ethernet converters. This seems crazy expensive considering the path through my walls that would be necessary (the room is two stories high), and I don’t have the home wiring diagrams showing the routes they took.

One other option is a wireless HDMI solution. Right now, I only see systems that support HD-quality HDMI over wireless. This might have to be the setup I use when I sell the house so that everything is in an enclosed cabinet and the place looks high tech (even though it will be low-tech HD). UPDATE: I just found a wireless UHD 4K @ 60 Hz system by J-Tech. $500 for the pair. If the $300 Ethernet-based pair does not work because my cable is only Cat-5 and not Cat-5E, I now have an option. It says line of sight. I wonder if I could cook a hot dog on one of those antennas.

Conclusion – more to come

I have ordered the “No Hassle AV” UHD 4K extenders (see the Ethernet extender link above) that run over Ethernet. I will test them and see if they solve my dilemma. I will post updates back here as I make progress (or fail to). Comments and feedback are welcome on Facebook, where I posted this article.

Ted Cahall

Amazing or Amazon Web Services (AWS)?

Working with Amazon Web Services (AWS) is actually quite amazing. It is not just a hosting platform – which is all I initially needed when I launched marrspoints.com. You know, the mundane, standard bit of Java, Tomcat, and MySQL hosting. After launching the FinTech start-up TheSubtractor.com with Scott Scazafavo last year, it has been a fantastic journey into the many application services AWS offers as well.

It is nice that the Route 53 DNS service integrates so seamlessly with their SSL certificates and elastic load balancers. It makes certificates dead simple and painless. It feels like the Geico caveman commercials compared to the work of setting up SSL sites in the past.

Email services are a snap with Simple Email Service (SES). Standard email client code such as Apache Commons Email works right out of the box once your account and domain are cleared by Amazon to send.
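
As a rough sketch of how simple it is, here is the shape of sending mail through the SES SMTP interface with Apache Commons Email. The region endpoint, addresses, and credential environment variables are placeholders, not my actual configuration:

    import org.apache.commons.mail.DefaultAuthenticator;
    import org.apache.commons.mail.EmailException;
    import org.apache.commons.mail.SimpleEmail;

    public class SesSmtpExample {
        public static void main(String[] args) throws EmailException {
            // SES SMTP credentials generated in the AWS console (placeholders here)
            String smtpUser = System.getenv("SES_SMTP_USER");
            String smtpPass = System.getenv("SES_SMTP_PASS");

            SimpleEmail email = new SimpleEmail();
            email.setHostName("email-smtp.us-east-1.amazonaws.com"); // region-specific SES SMTP endpoint
            email.setSmtpPort(587);
            email.setAuthenticator(new DefaultAuthenticator(smtpUser, smtpPass));
            email.setStartTLSEnabled(true);
            email.setFrom("alerts@example.com");  // must be an SES-verified identity
            email.addTo("me@example.com");
            email.setSubject("Test from SES");
            email.setMsg("Hello from Apache Commons Email over Amazon SES.");
            email.send();
        }
    }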

Email integration is also handled elegantly, with bounce, complaint, and delivery notifications delivered through the Simple Notification Service (SNS) as JSON messages. The costs are crazy low if you do not mind writing the glue between your applications and the services. Even text messaging is super cheap when you add texting alongside email.
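
To give a feel for the glue involved, here is a minimal sketch of handling one of those JSON notifications with Jackson, assuming the SES message has already been unwrapped from the SNS envelope; the handling logic is illustrative only:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class SesNotificationHandler {

        private static final ObjectMapper MAPPER = new ObjectMapper();

        // json is the SES message after unwrapping it from the SNS "Message" envelope
        public static void handle(String json) throws Exception {
            JsonNode node = MAPPER.readTree(json);
            String type = node.path("notificationType").asText();

            if ("Bounce".equals(type)) {
                for (JsonNode recipient : node.path("bounce").path("bouncedRecipients")) {
                    // e.g. flag the address in your own suppression table
                    System.out.println("Bounced: " + recipient.path("emailAddress").asText());
                }
            } else if ("Complaint".equals(type)) {
                for (JsonNode recipient : node.path("complaint").path("complainedRecipients")) {
                    System.out.println("Complaint: " + recipient.path("emailAddress").asText());
                }
            }
        }
    }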

Need a CDN? No problem: start with some S3 buckets as the backing store, put AWS CloudFront in front of them, add an SSL cert for the secure pages, and you are up and running in minutes! Again, dead simple to implement. Want to make the image management of your CDN native to your in-house tool set (versus using the AWS S3 console to upload files)? Use the AWS SDK in the language of your choice to list the files and upload new ones. It’s a little sad that sub-directories are not real in S3, but with some code you can fake them pretty well. You might want to pull in only the services you need from the SDK, as the full Java SDK is over 100 MB!
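
For illustration, here is a rough sketch of that list-and-upload flow with the AWS SDK for Java (v1); the bucket name and key prefix are hypothetical:

    import java.io.File;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3ClientBuilder;
    import com.amazonaws.services.s3.model.ListObjectsV2Result;
    import com.amazonaws.services.s3.model.S3ObjectSummary;

    public class CdnImageTool {
        public static void main(String[] args) {
            AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient(); // uses the default credential chain

            String bucket = "my-cdn-bucket";   // hypothetical bucket backing CloudFront
            String prefix = "images/2018/";    // "folders" in S3 are really just key prefixes

            // List the existing "files" under the fake sub-directory
            ListObjectsV2Result listing = s3.listObjectsV2(bucket, prefix);
            for (S3ObjectSummary obj : listing.getObjectSummaries()) {
                System.out.println(obj.getKey() + " (" + obj.getSize() + " bytes)");
            }

            // Upload a new image; CloudFront serves it from the bucket once cached
            s3.putObject(bucket, prefix + "logo.png", new File("/tmp/logo.png"));
        }
    }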

Need to do some AI or machine learning? Amazon has some amazing services that let you spin up compute only when you need it. You do not need to build out a Hadoop cluster on a bunch of EC2 nodes – they take care of all of that for you. You supply the PySpark or MapReduce code on top of their Elastic MapReduce (EMR) service. They are making so many inroads into Big Data as a service that it really lowers the barrier to entry for companies to innovate, test, and learn with their data strategies.
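
As a hedged sketch of what “supplying the code” can look like with the AWS SDK for Java (v1), this submits a spark-submit step to an existing cluster; the cluster id and S3 script path are placeholders:

    import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
    import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
    import com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest;
    import com.amazonaws.services.elasticmapreduce.model.HadoopJarStepConfig;
    import com.amazonaws.services.elasticmapreduce.model.StepConfig;

    public class EmrStepSubmitter {
        public static void main(String[] args) {
            AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

            // command-runner.jar lets EMR run spark-submit on the cluster;
            // the S3 script path below is a placeholder
            HadoopJarStepConfig sparkStep = new HadoopJarStepConfig()
                    .withJar("command-runner.jar")
                    .withArgs("spark-submit", "--deploy-mode", "cluster",
                              "s3://my-jobs-bucket/pyspark/my_job.py");

            StepConfig step = new StepConfig()
                    .withName("My PySpark job")
                    .withActionOnFailure("CONTINUE")
                    .withHadoopJarStep(sparkStep);

            emr.addJobFlowSteps(new AddJobFlowStepsRequest()
                    .withJobFlowId("j-XXXXXXXXXXXXX")   // existing EMR cluster id (placeholder)
                    .withSteps(step));
        }
    }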

For a number of reasons, I moved to Ubuntu 16.04 as my main version of Linux on AWS. Too many third-party packages were not available in AMI 2 from AWS. Things like Zabbix had to be hacked in, and AMI 2 was way behind on the version of Tomcat in the repo. Issues with the MySQL client libraries also come to mind. There was just too much fiddling to get a new node working with my standard development setup, when everything works perfectly from the repos on Ubuntu 16.04 (minus the Oracle version of Java, of course).

I am sure I am leaving out dozens of items I discovered along the way while working 100% in AWS for our production site at TheSubtractor.com. It has been a really enjoyable journey. So enjoyable I have not written a blog post since July! We are no longer in Beta and are open to all for registration. I could not be happier with AWS – it does what it says it does, and it is super easy to implement and integrate with all of its other services.

Ted Cahall

WolkeWerks Alpha Goes Live

Today marks another step along my journey as a co-founder (chief bottle washer?) of a FinTech start-up – we are ready to announce our WolkeWerks Alpha launch! It has been an interesting and rewarding experience to say the least. My co-founder, Scott Scazafavo, and I have spent most weekdays with at least one video meeting as we hash through the details of our product and the problems it solves for our consumers. Only two people in the company, and yet we still have multiple locations and a two-hour time-zone gap. Flexibility is a key to success.

To me, we have too much polish and too many features for an Alpha. For Scott, it means he no longer “cringes” when showing the product to his friends and family. The joys of being co-founders.

We are really fast, er, not that fast

As usual, things always take longer than one would like or even optimistically estimate. After Scott and I determined the initial high-level plan, we selected a data provider and I was able to produce a proof-of-concept / prototype in one week. WE are REALLY fast, we thought. Within a day, I had skinned the “consumer” version in Bootstrap 3. OMG, we are SUPER fast! The prototype made it clear we had all of the building blocks we would need (aside from an army of software engineers, designers, research assistants, etc.).

If I was able to write a fully functioning, Bootstrap-skinned prototype that was based upon a data service REST API in under two weeks, surely we could get our Alpha product live by April?  May at the very latest.

Where did it all go Blanche?

Hmmm. Oddly, as we launch today, on July 5th, it is interesting to see where the time went. Figuring out the features for Alpha took longer than expected as we scoped the ideas down to the bare essentials. Over a month was spent looking at technologies and products that we eventually realized would not be used until the MVP phase. A huge help was the book “The Lean Startup” by Eric Ries. It helped us focus on much less and test our way into changes incrementally (hence an Alpha, Beta, MVP, and then the major releases).

Of course there is also the fact that we only have one software engineer (me). I like to think I can code fairly quickly, but in fact I am also the AWS systems administrator, the Apache and Tomcat administrator, the MySQL DBA, the front-end web developer (JavaScript, jQuery, Bootstrap 4, HTML5, CSS, JSP), and the core services developer (Java). Oddly, as has been the case throughout my whole career, product management was able to generate ideas and features faster than engineering was able to build them. I was doing some product management along with Scott – but again, only I was doing any coding or system administration. Clearly I learned to stop making my own backlog bigger fairly quickly. 😉

In the beginning…

In the early phases we needed to agree on the basic functionality.  We knew the long term product would use distributed processing, AI and machine learning.  These are of great interest to me and so I poured myself into learning them more deeply (and getting them working in my lab) as fast as possible. This was going to be a super cool product and possibly even more fun to build!

A dollop of Hadoop and a sprinkle of Spark

What a dream job! I was a full-time student again. One of our product’s main goals is to help consumers manage their online subscriptions. My quest to build a Hadoop-based AI engine allowed me to add at least five more online subscriptions to my credit cards! I was a super-user and the product was not even built yet. Courses from Udemy, Lynda.com, Coursera, and Pluralsight were great! I quickly outpaced the top courses on Treehouse but had fun looking at them. These paid services were in addition to my regular free sites such as w3schools.com and others. I was suddenly an online training expert. Visions of blogging about the various online training sites and their relative merits sadly danced in my brain.

I took courses about Eclipse, Docker containers, AWS S3, Hadoop, MapReduce, Spark, and stuff I cannot even recall at this point.  All of this while building out and upgrading my Hadoop and Cassandra clusters and testing my various theories on how to make the product sing.  Then a dose of reality hit as I was working my way through “The Lean Startup” book.  Oops.  I had gone way too far down that path when we were not even sure of the core viability of our product.  There were MUCH simpler ways to achieve the product viability testing we would need without the AI engine working day one.  Well that was at least one month of “fun” research that was pointless to our Alpha.  Ouch.  That was a big chunk of time lost – regardless of how much fun it was.

Time for a road trip

Once I realized I had lost my way according to the “start-up bible”, I quickly re-focused my interactions with Scott. We decided we needed to spend some time in the same city (with a real whiteboard) to flesh out the phases of the product roadmap. Scott and I chose Denver as it was in between Seattle and Minneapolis. We used Trello (for free) and carved out what the Alpha, Beta, and MVP versions of the product needed to be (knowing even Alpha would shift as we continued to determine feasibility).

Real collaboration – at the code level

The next issue was linking Scott’s product features and design to my coding. Sounds simple, but we needed to agree on simple things such as a front-end framework and possibly a tool that would allow Scott to design mock-ups that could be implemented with relative ease inside the chosen framework. With much angst and encouragement, we agreed Scott would learn a little Bootstrap and select a Bootstrap tool to make the “unauthenticated” portion of the Alpha site, since my version was a white page with a login form (I thought it looked great!).

Since Bootstrap 4 was available, Scott picked a tool that generated Bootstrap 4 code. No worries, Bootstrap 4 is backwards compatible with the Bootstrap 3 code used in the prototype – right? Um, no. 🙁

I made the mistake of doing a five-minute hack of the consumer site (integrating the authenticated prototype and the new unauthenticated code from Scott). It produced a hybrid Bootstrap 3/4-ish site that I showed to Scott. I think I almost killed the company. It looked OK-ish (to me). Scott was so depressed at how bad it looked that he was on LinkedIn looking at Junior Product Manager roles.

Staying positive and building momentum

I realized that incremental crap-ism might not be the way to encourage Scott along the Bootstrap 3 to 4 migration journey. I quickly researched all the issues with making the site fully Bootstrap 4 and re-coded the prototype so it looked proper in both the authenticated and unauthenticated states. Slowly I coaxed Scott back off the ledge as he saw his pages working pixel-by-pixel exactly as he had designed them.

Moving from Bootstrap 3 to 4 and creating a working rhythm with Scott as he generated pages and updates to the prototype took some time, but the pages started looking better and better. Eventually Scott learned how to use GitHub and started making some UI changes directly himself. Now we had put down our rocks and sticks and begun cooking with gas.

There were some data service issues we needed to address. Then there were some AWS issues we needed to investigate and correct, plus redirects, SSL certs, password standards, and a lot of other things that were correctly deemed important for a FinTech Alpha.

Then the real fun began. The product was getting close enough, but there were some key features that we felt would make the product more useful and pleasing to our consumers, and there was an issue with “the browser wars” that was likely to cause confusion for our Alpha users. Should we add the features and try to fix the browser-wars issue? In the end we decided yes. The additional features required about a day of coding. Well worth it.

FireFox, FireFox, why, why, why?

We decided we needed to make a valiant effort to correct an issue where browsers were auto-populating our site’s credentials into a third-party access credentials form. We allow our consumers to access their own data through our product, but they need to securely pass their credentials on to their banking service. If the browser auto-populated these fields with our site’s login credentials, the user would likely pass the wrong credentials to the back-end banking service and the connection would fail. You would think it should be easy to stop a browser from auto-populating a form field. Unfortunately, again, no.

Here is where the “browser wars” come into play again. The HTML5 specification says there is an attribute called “autocomplete” that can be used on the <form> and <input> tags. By setting autocomplete=”off”, the browser should know not to populate that password field with the current site’s credentials. Perfect. Except none of the browsers honor it!
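
For reference, this is roughly the markup involved; the action and field names below are simplified placeholders, not our actual form:

    <!-- Third-party bank credential form (action and field names are placeholders) -->
    <form method="post" action="/linkBank" autocomplete="off">
      <input type="text" name="bankUsername" autocomplete="off" />
      <input type="password" name="bankPassword" autocomplete="off" />
      <!-- Mozilla's suggested alternative for password fields: -->
      <!-- <input type="password" name="bankPassword" autocomplete="new-password" /> -->
      <button type="submit">Connect</button>
    </form>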

Protecting the installed base – of course!

Browser vendors have built a significant hurdle so users are less likely to consider switching browsers. Many users let the browser remember all of their passwords to the many banking, media, and merchant sites they use. Smart users do not use the same password for their favorite recipe site as for their investment or banking sites. Who can remember all of those passwords? Let Chrome, Firefox, and Edge remember them for you! But once a user has done that, it is way too much work to start over again with a new browser and migrate all of those passwords (that they really don’t remember) along with them. Great product management by the Chrome, Firefox, and Edge teams in protecting their installed bases.

This is all fine and dandy, until a site needs to be sure the consumer is not confused when a different set of credentials is being requested. Auto-populating credentials (especially when they should be different from those of the site the user is on) may cause less sophisticated users to submit the wrong credentials. This further confuses the user experience and raises frustration levels.

A partial fix for now – and full fix in Beta

Interestingly enough, there is an article on a Mozilla Developer Network blog that explains a simple way to stop the browser from auto-populating the form fields. And guess what? It works for the latest versions of Chrome and Edge – but NOT FIREFOX!? Mozilla explains how to correct the issue (autocomplete=”new-password”) – but then explains that Firefox ignores that as well.

Why would Mozilla do this? Because they have been losing market share to Chrome for years – and they need to be more aggressive in their product design to capture and keep new users by storing their passwords at all costs. Sad.

So we launch with Firefox users having some confusion when we (securely) need their credentials for their bank services. There is a complex JavaScript fix that we will eventually implement: it randomizes the field names on load, then changes them back to “password” and “username” just prior to form submit. Sad that we need to resort to that. But we will make that part of the Beta and just pre-warn our Firefox Alpha users. After all, it is an Alpha, and we decide who signs up and which issues we ask them to avoid.

A curvy roadmap – and then a right turn

So after a month’s travel down a wrong road while developing our roadmap, we have finally gotten to the day of our Alpha release. We need to remind ourselves it is not an MVP or a Beta – it is just an Alpha. But to me, it is an Alpha that is pretty slick and does a lot of what we will need it to do as we roll towards Beta and the MVP.

There are many features left for Beta and the MVP as well as dusting off the machine learning and AI code – but we are on our way!

Ted Cahall

 

Ted Cahall, Moz, and Open Source

Ted Cahall and Moz at the old Netscape offices, circa 2009

I am a huge proponent of open source. I often refer to using open-source software as “standing on the shoulders of giants”. Such amazing leverage to accomplish complex tasks. Software developers today are the modern alchemists, stringing together pieces of the solution as systems integrators. This is my tribute to open source and Mozilla, taken at AOL’s old Netscape offices back in the 2009 time frame.

Ted Cahall

Zoom – FREE P2P Video Conference

Scott Scazafavo and I have been working full time on our new start-up, WolkeWerks.com. This often places me in my home office reaching out to colleagues for advice and collaboration. My communication tool of choice has been free peer-to-peer (P2P) video conferencing. Scott and I have used Skype and FaceTime, but experienced the common video lags and garbled voices. These were frustrating experiences, needless to say.

Zoom Logo

It only takes a garage to fall on me

My dad used to say, “I don’t need an entire house to fall on me to learn something, it only takes a garage”.  I think he was telling me to learn from trends when they are still small – and even a garage hurts when it falls on you.

The second time someone (OK, a nice recruiter) asked me to connect with Zoom, I realized it was a high-quality service with none of the usual video choppiness and garbled voices. I did not look into pricing, as I figured it was another service used by larger corporations, à la WebEx or BlueJeans. When a colleague in Berlin sent an invite with it, I thought it was odd that he was willing to pay for a service just to chat with me. This was on top of another colleague having a Zoom call already scheduled with me. Why does everyone want to see a bald guy on video when it is such a frustrating technology?

FREE Zoom P2P Video Conferencing

Because it really isn’t frustrating anymore. At least not from my sample set of four calls: one to Boston, one for two hours to Berlin, and two to different people in Seattle. But the biggest surprise: it is FREE for two people with unlimited connectivity. It is also free for three or more people for 40 minutes or less. FREE is my favorite word, as I am an unabashed open source bigot. But FREE that really works well is amazing.

I love the idea of getting people to try something for free for personal use; once they fall in love with it, they are happy to pay for it in other circumstances. I have not tried a call with three or more participants yet. I suspect they have this technology so dialed in that once you do a 30-minute call with three or four people that runs long, you get hooked on how well it worked and add your credit card to the account.

Check out Zoom

The folks at Zoom also have connectivity modules and upgrades for H.323/SIP systems and LifeSize, Polycom, and Cisco gear in corporations. It is all on their website. They seem to have really nailed the tech on this so far.

I have a call Monday to London with another colleague. I would never have asked him to use video conferencing in the past. Too clunky and messy. But we are set up on Zoom, and it will be good to see his face for the first time in a year – even though we catch up nearly every month.

Check out Zoom. All it takes is a laptop, iPad or a mobile phone.

Ted Cahall

Zabbix – another Ted Cahall recommendation

Zabbix is an open source system monitoring and alerting tool.  Even running a home data center requires monitoring the status of the equipment. When there is an issue, it needs to alert folks that things are not working correctly.

Zabbix logo

Ted Cahall uses Zabbix for Monitoring and Alerting

As I have mentioned, I run several Linux servers at home and in the AWS cloud. This is great – but it could become a nightmare to know when servers are having issues. Enter Zabbix – it is free and comes included in most Linux distributions, so it is a natural choice for monitoring Linux servers. Another great feature is that it can monitor Windows machines and Macs as well.

High Level Zabbix Overview

Zabbix is written in PHP and stores its configuration, monitoring, and alert data in a MySQL database. All of these are also free and included in Linux distributions. I would recommend adding the Zabbix repo to the package manager on each of your Linux machines. The agent version shipped with Ubuntu 16.04 LTS is 2.4.7 as of this blog post, whereas I selected version 3.0 from the Zabbix repository. Those Linux machines are currently running version 3.0.16 and get updated as the code is updated at Zabbix.

Zabbix uses a server to collect the data and store it in MySQL. It also uses “agents” that run on each of the monitored machines. The agents are further configured to monitor certain aspects of each of the Linux machines on which they run. Zabbix monitors CPU, memory, bandwidth, context switches, etc. right out of the box for most Linux machines without configuration.

Running in Cahall Labs

Currently I have the agents monitoring the MySQL databases on some of the Linux servers as well as the Apache web servers and Tomcat app servers. I am also monitoring my Cassandra and Hadoop clusters. An interesting open source feature I found is the ability to monitor my various APC UPS power backups. Now I know if one is getting sick or when it goes offline onto battery mode. This is useful for knowing the power has gone out when I am not at home. The agent can also be configured to monitor a Java JVM through its JMX gateway.
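
For anyone curious, the agent side is just a small text file. Here is a rough sketch of a zabbix_agentd.conf; the addresses and hostname are placeholders, and the UPS check assumes apcupsd’s apcaccess tool and its usual output format:

    # /etc/zabbix/zabbix_agentd.conf (excerpt) - values below are placeholders

    # Zabbix server allowed to poll this agent, and the server to send active checks to
    Server=192.168.1.50
    ServerActive=192.168.1.50

    # Must match the host name configured on the Zabbix server
    Hostname=nuc-node-01

    # Example custom item: report the APC UPS status by parsing apcaccess output
    UserParameter=ups.status,apcaccess status | awk -F': *' '/^STATUS/ {print $2}'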

I also monitor my Synology NAS servers and my older NetGear NAS with Zabbix. The AWS production instance of marrspoints.com is monitored for uptime and page load performance (see graph below) from my home data center. I also graph the number of drivers being tracked in marrspoints. The built-in data graphing is very useful.

Zabbix graph of page load performance on marrspoints.com

Zabbix can scale to thousands of servers and has a proxy feature to help offload the main server. We used Zabbix at my previous company and monitored thousands of servers in AWS as well as our private cloud. The auto-discovery feature allowed us to locate new VMs and automatically add them to the monitoring and alerting framework. Zabbix is now shipping version 3.4; I have not tested beyond 3.0 at this time.

Alerts

Zabbix can alert you when something has exceeded a pre-configured threshold. For a home data center, this was a challenge, as it was not clear it could simply use a Gmail account as the outbound sender. I overcame this by adding an SES account to AWS. This allows my Zabbix server to connect to the AWS SES server and send outbound alert emails to my personal email accounts. See the sample email alert sent via Amazon SES below:

Zabbix alert email sent via Amazon SES.

It also supports sending SMS text messages as alerts.  However, I have not implemented that feature due to the costs of the SMS service.  Email is good enough for my home data center.

Ted Cahall highly recommends Zabbix!

In summary, I find there is very little I cannot accomplish with Zabbix for my home data center (or for the hybrid clouds at my previous employer). With some innovative thinking, I have seen everything from room temperature to the number of people passing through an automated gate measured with it.

If there is a way to get the data back to a Linux server, there is a way to monitor and alert on it from Zabbix. It is the Swiss Army knife of systems monitoring tools – and it is FREE!

Ted Cahall

Home Storage? Back-up? Instant File Sync? Synology!

Synology NAS Servers

Some of the most important components of my home data center are my Network Attached Storage (NAS) servers.

Synology 1517+ Consumer NAS

I have had my old NetGear ReadyNAS unit for at least 9 or 10 years now. It has a whopping 1.3TB of storage across three drives in a RAID 5 configuration. NAS units are great for storing my racing videos that no one will ever watch, old photos now that everyone with a phone collects thousands of photos a year, and copies of my important tax, mortgage, and legal documents. Some of my friends store TBs of pirated videos from the dark web. I am a Netflix and Apple TV guy, so that saves me a few TBs.

Goodbye NetGear, hello Synology

While the ReadyNAS served me well, it was long in the tooth and short on TBs. It was also missing some interesting new features that I did not even know I was living without until I bought my first Synology NAS back in 2015 – the DS1515+. These guys have done the whole consumer NAS thing really well.

Main attraction

The main feature I use and like is the immediate file sync of directories on my Linux servers (and one of my Windows 10 desktops as well). Once I configured this option and selected the directories I wanted synced, all of those files are continuously and safely stored on the NAS. No backup jobs – the files are copied to the NAS file system immediately upon edit or save. It is also a nice way to grab files from one machine on another, as the systems can all see the disk replicas across the servers.

This does not mean I do not do backups. I have Amazon Glacier storage, and those critical legal, tax, and mortgage files are sent out to Glacier once a week from the Synology NAS. The great thing is that Synology provides the service that runs on the NAS to do the Glacier backup. Really simple integration.

Built-in Servers (services)

The disk drives are even “hot swappable” – no downtime if a drive goes bad. Aside from rock-solid hardware, another amazing thing about Synology is the application ecosystem they provide on the NAS. They want you to make this your “server” for everything and anything you do in your home. Want a VPN server? It has that. DNS? Yep. Connect with my Macs, Windows, and Linux machines in their native network protocols? Of course. It has email servers, video surveillance servers (I bought two cameras to test and they are great), and video, photo, and audio servers. There are Active Directory, Email, Network Management, Print, Content Management, WordPress, WikiMedia, E-Commerce, Docker, Git, Web, Plex, Application (Tomcat), and Database servers! These all run natively on the NAS – not just using it as a disk, but in its memory and on its quad-core CPU.

I cannot possibly list all of the features and servers these new Synology NAS units supply.  I have tested many of them, and they are rock solid and dependable.  I never envisioned using my NAS as a “server” other than as a network attached storage server.  Now it can work as so much more.

The more the merrier

The Synology product has me so hooked, I bought my second unit! A DS1517+ with 8GB of main memory and 30TB of storage (5 disks @ 6TB each). I use this one for security video storage and as a snapshot backup of the first unit. Had I planned it better, I could have arranged these two Synology units in an active-passive mirrored configuration. This would allow one to take over if the other crashed. Clearly I do not need that at home. But it is nice to know that simple consumer-grade products offer these features now.

These really are a great complement to my home data center’s NUC-based Linux servers. But I do not use the NAS for Cassandra or Hadoop storage. That all lives on the SSD drives in each of the NUC units for performance reasons. I back them up to the NAS during off hours.

Highly Recommend Synology NAS

I fully and highly recommend these Synology NAS products.  They do not sell direct.  I recommend finding them on Amazon after you spend hours like I did on their product site comparing models and features.

[Update] One cool thing I forgot to mention before I hit “publish” is that this unit of course runs Linux. It is a 3.10 kernel version modified by Synology. This is the reason so many of these services (servers) are available as a stock part of the unit. Synology chose to make Linux the engine that runs the NAS and brought along many of the Linux services. With simple configuration, you can ‘ssh’ into the NAS and work on it as though it were a plain old Linux box. It is really well done.

Ted Cahall

 

MARRSPOINTS gets some SEO love

The marrspoints.com racing application recently got some SEO updates. These were long overdue in terms of getting better ranking inside Google. Now a driver’s season results URLs include the driver’s name (example for Mike Collins), and the race results include the race name and classes (example for the 2017 MARRS 5 SM Feature race). Most importantly, the Points Leaderboards now have the class name and season as part of the URL.

On top of all of that, I automated the sitemap to build nightly and worked with the Google Search Console to fix duplicate title tags and content descriptions.

Enter Tuckey – SEO URL Rescue!

This all should have been done long ago, but features were my first priority. I used the Tuckey UrlRewriteFilter for all of the friendly-URL magic. It really is awesome, and I am glad I remembered it from all the way back in my CNET days, when we used it on a project there.
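
Under the hood it is just a servlet filter driven by a WEB-INF/urlrewrite.xml file. As a rough sketch of the kind of rule involved (the pattern, target page, and parameter name below are made up for illustration, not the actual marrspoints mappings):

    <?xml version="1.0" encoding="utf-8"?>
    <urlrewrite>
        <!-- Map a friendly driver URL like /driver/mike-collins/123 to the
             underlying JSP with a query parameter (names are illustrative). -->
        <rule>
            <from>^/driver/([a-z-]+)/([0-9]+)$</from>
            <to>/driverResults.jsp?driverId=$2</to>
        </rule>
    </urlrewrite>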

I still have some cleanup to do for pages selected via form drop-down menus. My sitemap tool does not include those paths. I know Google is a lot happier to no longer see parameters on the URLs. It takes a LOT of JavaScript magic to rewrite the form action to use the rewrite destination, so that may be left for another year or two until it works its way up the stack in terms of importance.

Ted Cahall

Surfing to the end of the Internet

Keeping (too) busy

Since I left Digital River at the end of February, I have been working closely with Scott Scazafavo on a stealth start-up idea we had been kicking around. Most mornings I hit my office early and attempt to further the research or the code base. I worked on some Java REST API code I wanted to improve from its early usage at marrspoints.com. I remembered there was a simple test site that gave canned responses to HTTP GET and POST requests, along with cookies and the like. After a tad of searching, I found it again: httpbin.org – what a nice tool. Simple yet elegant, and great for testing HTTP code samples where you just need a simple endpoint. Tutorials on the Internet should just use this site in their examples, as it likely will not change much.
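
As a concrete example of the kind of quick endpoint test I mean, here is a minimal sketch in plain Java using HttpURLConnection; the query parameter is arbitrary:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HttpBinSmokeTest {
        public static void main(String[] args) throws Exception {
            // httpbin.org/get echoes the request headers and query args back as JSON
            URL url = new URL("https://httpbin.org/get?driver=42");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/json");

            System.out.println("Status: " + conn.getResponseCode());
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }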

The dangers of the Internet

This is where the danger began… As I finished the simple testing I was doing and was ready to move on to the next phase, I noticed that the site had the author’s name with a hyperlink. Since I wished I had written such a useful “demo” or example.com-style website, I wanted to see a tad more about him. Through Kenneth Reitz, I learned that I comparatively don’t have many cool hobbies or talents (I am not that great of an auto racer, and I have not written books, published music, been a professional speaker, or even an amateur photographer). That is all on top of his enormous contributions to the open-source space. Through a link on his personal values page, I saw another link stating that “Life is not a Race, but it has No Speed Limits”. Of course that deserved a click!

Through Kenneth and that link, I met (online, so to speak) Derek Sivers and read his axiom that “Life Has No Speed Limits“. And through that story, I learned about the life of Kimo Williams and why focus matters. Focus? On the Internet, with so many lessons to learn?

Saying “Hell Yeah!”

It was great to “meet” three SUPER TALENTED people on the Internet this morning. People I will likely never meet in person or even exchange emails with. Yet, people from whom I have already learned. While perusing Derek’s site, I found another life lesson to which I truly try to adhere: No “yes.” Either “HELL YEAH!” or “no.”

OK- back to that focus thing and getting some work done.

Ted Cahall

Using Postman for API consumption

Being a caveman

So what is wrong with curl? Nothing. But Postman (at getpostman.com) is simply one of the best tools I have used while developing code that consumes APIs. This is another case where I was using caveman tech (curl) to do a job so elegantly handled by a service that makes a desktop app that runs on Linux, macOS, and Windows (and syncs across them).

Sometimes you just need an API

My coding and racing adventures led me to develop, and win an award for, the marrspoints.com application. The app consumes two different APIs: race-monitor.com and motorsportreg.com. I used curl to do the testing dirty work for these, as one of them did not publish the response formats I needed for my JSON parser.

I have been playing with a stock/equities “demo app” for my Cassandra cluster.  The app required me to replace the old Yahoo quotes feed.  I had to do testing on the new feed I chose, and I was still doing it with curl.

Even a stealth API…

I am now working on a stealth start-up idea with an even more stealthy cohort of mine in the financial space. The data company we have tentatively selected (and their API documentation) pointed me to Postman. It is awesome. I have deeply tested the financial access, accounts, instruments, etc. This was accomplished on my own accounts with only a couple of hours of work and research. Postman is scriptable, has variable replacement, and more. Oh, and the best part: a single-developer license is FREE. My favorite price.

To think Sam Morris at Digital River talked about Postman dozens of times, and it never occurred to me to go look at it. That cost me a lot of wasted time, especially since I know Sam is “the man”. Thank you, Sam – the second time I heard of it, I knew to go get a copy and learn it quickly.

Ted Cahall