FreeNAS, HDDs, and more…

So, today I needed to replace an HDD in my FreeNAS server.  This is a first for me; I’ve never had any issues in the several years I’ve been running these.  I wanted to dredge up more info about the disk before I replaced it.  I found a command that accesses the S.M.A.R.T. features of the drive and gives information such as the model and manufacturer part number, which was nice.  I could then look up the warranty information on the other drives and begin replacing any that are out of warranty.


smartctl -i /dev/[name_of_device]
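
For example, on a typical FreeNAS box the first SATA disk usually shows up as ada0 (device names vary by system):

smartctl -i /dev/ada0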


Useful notes and commands

zpool status – this command reports the status of all storage pools; you can also check a specific pool by adding its name to the end of the command.
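
For example, to check a single pool (assuming a pool named tank):

zpool status tank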


Best Practice – whether you’re using FreeNAS or really any storage setup, it’s a good idea to keep a log of hard drives.  This company never did that, and they probably have enterprise drives that have been in service for 8+ years!  So, in a spreadsheet, I started a log of when drives were purchased and from what source.  I include the manufacturer, model number, and serial number.  From that point I begin bench testing the drives, making sure they are functional at the most basic level, and then move on to S.M.A.R.T. tests.  I record all the results so I have a detailed overview of inventory and performance.
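
To seed that spreadsheet quickly, something like the following can pull the model and serial number from every disk.  This is just a rough sketch assuming FreeBSD-style ada device names; adjust the glob to match your system:

# print device name, model, and serial number for each disk
for disk in /dev/ada?; do
  echo "=== ${disk} ==="
  smartctl -i "${disk}" | egrep 'Model|Serial Number'
done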


Opinion: Why the Nintendo Switch will fail.

Nintendo Switch

I’m going to preface this post by saying that I’m a Nintendo Fan Boy.  I love Nintendo and their games, and I will always have the best memories of playing them.  With that out of the way, let us dive into the signs I see that the Switch will fail.


I want to address the parallels I see between the choices Nintendo is currently making and the choices made in the past by another little Japanese game company named SEGA.

In the early ’90s I grew up with “the console wars”.  It was a war to dominate the home video game console market, and the main contenders were Nintendo and SEGA, with brief appearances from other manufacturers.  SEGA screwed up big time and lost the war, which is why we don’t have SEGA consoles today.  Here are the things SEGA did that led to its demise, and which parallel what Nintendo is currently doing:

  • They relied on technical specs and marketing to sell product
  • Lack of good first party and 3rd party games hurt their library
  • Failure to heal the broken trust of their established fan base

The console wars really heated up when SEGA released the Genesis.  With taglines like “Genesis does what Nintendon’t” and “Blast Processing!”, SEGA was looking to make the original Nintendo look like a baby’s toy (and they were doing a good job!).  On top of all this, people started to look at consoles with a more technical approach: what processor and memory specifications does the console possess?  8-bit graphics vs. 16-bit graphics soon became a major topic of heated conversation at the school lunch table.

This tactic worked, and SEGA sold consoles to new and old video game players alike, snatching up everyone in the market, from Nintendo Fan Boys like myself to audiences buying a home video game console for the first time, and proving to Nintendo that no consumer was out of SEGA’s reach.

Nintendo’s quick answer to the Genesis was the SNES – a console that I don’t believe needs an introduction or explanation.  SEGA’s claim to the 16-bit world was now in jeopardy because of Nintendo’s new offering.  What could SEGA possibly do?  At this point SEGA was losing ground in the console war and Nintendo was back on top like there had never been any competition to begin with.  SEGA didn’t have the strong library of games that Nintendo did, their brand mascot didn’t seem to have the appeal of Nintendo’s famous plumber, and SEGA’s hardware was on the same level as the competition.  SEGA continued to push propaganda and technical specs down gamers’ throats, but no one was buying it anymore.

By this time SEGA was in a panic, trying to figure out how to one-up Nintendo.  The ’90s saw a shift from traditional forms of media like VHS tapes and disks to optical media such as CD, LaserDisc, and DVD, which could hold more information on a more durable medium.  This would be SEGA’s saving grace, or so they thought.

So SEGA released an attachment for the Genesis called the SEGA CD.  Long story short, it was a failure for SEGA (however, it paved the way for the next-gen consoles we enjoy today!).  The games were even worse than the Genesis titles and relied on horrible “full motion video” as a gimmick to sell copies.  Load times were slow.  Even hooking the attachment up so you could play proved a challenge for some.  It was also expensive for the time.  It was a disaster every which way you looked.

Time went on and Nintendo continued their reign.  SEGA, growing even more desperate, decided to try to regain consumer confidence by releasing an all-new 32-bit… ATTACHMENT for the already aging Genesis/SEGA CD.  I can tell you right now, no one bought these.  You could find them in a clearance bin for about $10 years after they ceased production.  The SEGA 32X suffered a worse fate than the SEGA CD.  It was prone to functionality problems.  It was difficult to set up and use.  It went back to cartridges (after SEGA had already released a CD-based system promising consumers, “this is the way of the future!”).  Let’s not forget about the poor and tiny game library.

SEGA Genesis,CD,32X Monster

At this point consumers resented SEGA and felt betrayed.

Let’s take a step back now and look at what they had done.  SEGA had released three consoles, each technically more powerful on paper than the last, but all of which (I know there are exceptions) looked pretty much the same graphically.  The few games that did show a hint of graphical improvement suffered from horrendous gameplay.  Be it poor controls or just bad game design, the games were simply terrible!  The expense of owning all the SEGA hardware in the ’90s was damn near $400 for the three consoles alone, and by the time you purchased a game or two for each system and maybe another controller, you were probably closer to half a grand.

The numbers just didn’t add up for consumers, and we turned our backs on SEGA.  They screwed us.  We weren’t stupid; all the technical specs and add-on upgrades in the world couldn’t make up for the fun we were having on the SNES and the Nintendo 64.

I could go on to speak about the SEGA Saturn and how it had to compete with the Sony PlayStation when it was released (we all know who won that battle), but I think by now you get the point.  I’m not going to say anything negative about the SEGA Dreamcast because there wasn’t anything negative to say; it was amazing and innovative (it was the first mainstream console to feature online gaming).  But by this point SEGA’s reputation had been so tarnished by four generations of broken promises of better gaming experiences that the trust between SEGA and its consumers was irreparable.

So you’re probably asking now, “How does this have anything to do with Nintendo?”

Let’s pick up where we left off with the Nintendo 64.  Nintendo decided it was time for a big upgrade.  Enter the Nintendo Game Cube.  The Game Cube was Nintendo’s first console to make use of optical media (as long as you don’t count the CD-i or the Nintendo PlayStation – which may make for an interesting post some day).  This meant that Nintendo could pack more game information onto a disc, expanding the graphical and audio capabilities available to game developers and making it an attractive platform for development.  They even released a device that let you play game cartridges from their handheld system on the Game Cube.  Top this off with online multiplayer capabilities and you have one killer game system!

Unfortunately, this is also where our story departs from all the happy good stuff.  Nintendo would go on to produce the Nintendo Wii.  Motion controls, DLC, and a competitive price point are what seduced consumers into purchasing what was otherwise a dated device.  I used to see people joke about the Wii, saying it was nothing more than two Game Cubes duct-taped together.  Honestly, that’s not far from the truth.  In fact, that statement

2 NGC = Wii

may even be a little generous.  When you have two consoles, current generation and last generation, and you’re releasing the same game simultaneously on both platforms, that should be a RED FLAG.  You’re pretty much buying the same hardware in a smaller package with a few new “tricks and gimmicks”.  Graphically, it was very difficult to tell the two versions of games like Zelda: Twilight Princess apart; the average gamer probably couldn’t.  This was the turning point for some gamers.  I should also point out that the Wii had to defend its market share against the Xbox 360 and the PlayStation 3, both of which were, without a shadow of a doubt, graphically superior.

The second major issue the Wii had, which wouldn’t be realized until the introduction of Nintendo’s next console, was its library.  The Wii’s library is absolutely massive.  I mean mind-blowing.  There were almost two thousand games produced worldwide, with over 1,500 in North America.  The Wii was also backward compatible with the ENTIRE GAME CUBE LIBRARY!  Throw in DLC and there are more games than you could ever play in one lifetime.  Here’s the problem with that: Nintendo made the Wii so easy to develop software for that there were quality issues.  I would venture to estimate that for every 20-30 Wii games, there was one amazing game worthy of the Nintendo Seal of Quality.  I’m sorry, but that’s a lot of shitty games, and Nintendo knew it.

Which brings us to the Nintendo Wii U.  So mind-blowingly unoriginal and un-innovative that they didn’t bother to change the name and just added the letter U on the end.  The Wii U had a whopping 128 exclusive titles, of which I could probably name 10 if you gave me a few minutes to think – AND THAT INCLUDES TITLES THAT WERE ALREADY RELEASED ON THE WII (AND NOW ONE WHICH WILL SEE A DUAL RELEASE ON THE SWITCH)!

Seriously, the Switch is nothing more than a Wii U with better portability and detachable controllers.  Oh, and let’s not forget the online play system that you

Wii * 2 = Wii U

need to pay for access to.  Nintendo should have just marketed the Switch as the Wii U Slim.  I’m having a very difficult time distinguishing any major difference, especially when your killer app, Zelda: Breath of the Wild, is seeing a simultaneous release on the Wii U!


My mind is blown.

So, time to come full circle.  Earlier I mentioned that SEGA failed because they relied on technical specs to sell a product.  I’ve never, in all my years, seen anyone talk as much about a console’s specs as people do about the Switch’s.  The specs are always in question; people are already debating whether they will allow games to hold up graphically against the PS4 Pro and Xbox One.  If your game system isn’t even on the shelf yet and people are already calling its hardware capabilities into question, you have a problem.  Nintendo relied on releasing these specs in the hope that consumers would see them and have confidence that the system would be a high-performance game console.  This backfired.

I also mentioned a lack of first and third party software.  Well, if the Wii U isn’t a shining example of that, I don’t know what is.  The launch titles aren’t exactly confidence-instilling either: two Nintendo original titles, some cross-platform games, and some crapware.  No thanks, Nintendo.  Take me back to your glory days, when your pack-in game was a Mario game.

Lastly, this brings me to the single most important reason why the Switch will fail: Nintendo has broken their fan base’s trust.  After three generations of minimal innovation, weak software releases, and a lack of features that other consoles have which Nintendon’t (yep, I went there), you can only expect one thing.  No matter how good the Switch actually is, consumer confidence isn’t there.  The SEGA Dreamcast was a fantastic piece of hardware that had everything we wanted.  The Dreamcast failed because it was too little, too late; SEGA had damaged the confidence of their fans with four generations of shortcomings and broken promises.

The Switch is destined for the same fate.


Windows Server Handy Commands: Finding Uptime

server_uptime

There are two commands that I have found useful in determining server uptime:

systeminfo | find "System Boot Time:"

and

net statistics server

The main difference I have found between the two is that the first option gives you the time since the last actual boot or power-save state, while the second gives you “availability”: in the event you have a server that sleeps with WOL enabled, it reports the sum of the uptime plus sleeping states.
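
If you just want the raw last boot time on its own, WMI can report it as well (assuming wmic is still available on your version of Windows):

wmic os get lastbootuptime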

Microsoft used to provide a tool called uptime.exe for better management of uptime statistics; however, it looks as though the project has since been taken over by someone at codeplex.com:

http://uptimeexe.codeplex.com/

Leica Geosystems CLM and TruView having a strange licensing issue

leica

So I’m currently running a Leica CLM on an EC2 instance.  It’s the only thing running on that Microsoft Windows Server 2012 R2 machine.  Today I had reports of both the TruView Global and TVG Generator licenses being unavailable.  Sure enough, I logged into TVG and there was no license.  I checked the communication between the TVG server and the CLM server, and everything was good.  So I launched a remote desktop session on the CLM to investigate further.  Somehow, the CLM software was completely devoid of any Leica Geosystems entitlements!  What the heck?!  I immediately located the appropriate entitlements and activated them.  Upon logging back into the TVG administration portal, I noticed that it still hadn’t re-acquired the licenses it needed.  I forced it to pull a license, double-checked the CLM server’s IP to make sure it was pulling from the correct source, and everything was just as it was before.

I had to get in touch with Leica to have them “recover” the license.  I’m not really sure what that means on their end, but whatever they did instantly made TVG authorize against the CLM again without me doing a single thing!  This has happened before, when I originally set up these instances.  I wonder if anyone knows whether this is related to it being on Amazon EC2 or if it’s a known bug in Leica licensing…

How to merge multiple vTours in krpano

krpano

This week a situation came to light where we had two separate vTours that needed to be merged into one.  One of the vTours was massive and would have taken too much time to rebuild in krpano.  This isn’t an overly complicated task and really only requires two steps.


  1. I opened the smaller vTour’s tour.xml file and copied all of the scenes.  I then pasted those scenes at the end of the scene information in the larger vTour’s tour.xml file.
  2. After that, it was just a matter of moving all the pano tile folders from the smaller vTour into the larger one (see the sketch below).

One major note: the pano tile folders must all have unique names for this to work.  If you attempt to merge tile folders with the same names, the merge will fail.
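
For step 2, here is a minimal shell sketch (the tour paths are hypothetical) that moves the tile folders and flags any name collisions before they cause trouble:

# move the smaller tour's pano tile folders into the larger tour,
# reporting any folder whose name already exists at the destination
cd /path/to/small_vtour/panos
for d in */; do
  if [ -e "/path/to/large_vtour/panos/$d" ]; then
    echo "name collision, rename before merging: $d"
  else
    mv "$d" /path/to/large_vtour/panos/
  fi
done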

That’s pretty much it!  My Bing map loaded all the coordinates, and all my vTour hotspots were correct.

krpano example

Updating Bentley SELECT Server settings across a domain.

Bentley SELECT Server

Recently we migrated our Bentley SELECT Server to a new server at a different site, and to a new version.  I personally had never had to do this before; the firm I work for has had the same SELECT Server in service since long before my employment there.

I had to update the settings on every workstation across our domain.  Physically going to, or remote desktopping into, every machine to change the settings via the Bentley License Management Tool was not a viable option.  I wanted minimal involvement and a process that was seamless to the end user: no user interaction.

After some research I found that Bentley packages their products with a tool called licensetoolcmd.exe, which can be called from a script with a few command-line parameters to update not only the server address of your Bentley SELECT Server but also the activation key.  The following is a sample of the syntax for making your own custom batch file to deploy on your network:

@echo off
rem Skip machines that have already been updated
if exist c:\logs\bssupdate.log goto end

rem Point the workstation at the new SELECT Server and activation key
licensetoolcmd.exe configure /setting:selectserver /value:name.yourserver.com
licensetoolcmd.exe configure /setting:activationkey /value:bentleyserveractivationkeygoeshere

rem Record the update locally and on a central share
if not exist c:\logs mkdir c:\logs
echo %computername%, %username%, %date%, %time% >> c:\logs\bssupdate.log
echo %computername%, %username%, %date%, %time% >> \\someserver\bssupdate.log

:end

Migrating an Amazon EC2 Instance to another Region

Data Migration Amazon AWS

Today there are countless cloud services available, and your company most likely takes advantage of at least one of them (and if not, you really need to sit down and think about it).  Amazon has been in the cloud game for quite some time now.  Remember your first Kindle?  That used a form of cloud services to make books, magazines, and other media available from a central location across a variety of your personal devices.  In fact, Amazon may very well have been one of the first successful cloud service providers.

With the Kindle being such a hot product, it was a no-brainer for Amazon to dive into other forms of media like music, audio books, television, and movies, to name a few.  For a few years now, Amazon has taken it to the next level, offering more than just cloud media to its subscribers.  Today you can log onto Amazon AWS (Amazon Web Services) and, within just a few minutes (literally), have a new Microsoft Windows Server 2016 up and running, ready to handle anything from DNS and Active Directory services to file storage and product licensing services.

If you are looking to learn how to harness the power of Amazon Web Services for highly available, scalable servers, that is a whole different topic of conversation.  This document is intended for those who are already familiar with the process of creating and managing EC2 Instances and, for a variety of reasons, want to migrate their Instances to another Region.

This process requires that you take a snapshot of your EC2 Instance in the form of an image, which means you will have to stop the running Instance.  There are two benefits to this.  First, you will have a nice backup of the server’s state prior to migration in the event that disaster occurs.  Second, the resulting AMI (Amazon Machine Image) is what you will use to launch a new Instance in your desired Region.

  • Log into your AWS account and go to your EC2 Instances.
  • Locate the Instance you want to migrate and right-click on it.
  • From the context menu select Instance State -> Stop.
  • You now must wait for the Instance to stop running.
  • Once the Instance has stopped, right-click on it again and select Image -> Create Image.
  • After the Image is created (this may take some time…), navigate to Images -> AMIs.
  • Right-click on the AMI you just created and select Copy AMI.
  • Select the Region you wish to migrate your server to.
  • Once the copy finishes, switch to the new Region in the upper right corner.
  • You may then finish the process by navigating to Instances and launching the new AMI.
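
For reference, roughly the same workflow can be scripted with the AWS CLI.  This is a sketch only; the instance ID, AMI IDs, names, and Regions below are placeholders:

# stop the instance and create an image (AMI) from it
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "pre-migration-image"

# copy the resulting AMI into the destination Region
aws ec2 copy-image --source-region us-east-1 --source-image-id ami-0123456789abcdef0 --region us-west-2 --name "migrated-server"

# launch a new Instance from the copied AMI in the destination Region
aws ec2 run-instances --region us-west-2 --image-id ami-0fedcba9876543210 --instance-type t2.medium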

Troubleshooting Bentley Water & Sewer GEMS Installations

One of the most finicky and problematic software products that I’ve ever had to install has to be the Water and Sewer GEMS line from the engineering software publisher Bentley.  If this software breaks and needs an uninstall/reinstall, it can be a real pain.  Fortunately, once it’s installed correctly it is pretty much problem free.

The root of most installation issues with these packages is Bentley’s software development ideology and the lack of support for “mixed software architecture environments”, meaning computers that have both x86 and x64 software running on them.  Water and Sewer GEMS need to match the processor architecture of the destination software (MicroStation, AutoCAD, etc.).  On top of this, the GEMS products rely on the Microsoft Access Database Engine to store design data, so it also matters whether your Microsoft Office is 32- or 64-bit.

Fortunately, most installations of Windows today are 64-bit, so that’s not too much to worry about.  In my office we only use 64-bit software products (when available); however, for other compatibility reasons, our MS Office needs to be 32-bit.  So here’s the problem: 64-bit Windows + 64-bit CAD software + 32-bit MS Office = NO GEMS PRODUCTS FOR AUTOCAD!  So frustrating!

The easiest thing to do is uninstall GEMS products, and then uninstall the following:

  • Microsoft Office Products
  • Outlook
  • Any Additional Office Products
  • MS Access Database Engines (there may be more than one installed)

Once you’ve removed all these products from your machine, you can go ahead and reinstall the Bentley GEMS products, and they should finally generate the required shortcuts to access the associated programs from your Autodesk AutoCAD software packages.  But wait!  There’s more!  Once you’ve done this, don’t forget that you also need to reinstall all the Microsoft products you uninstalled.  After this is done, make sure you restart your computer and check the Bentley software again to make sure it’s still functioning.

How to use RSYNC to make backups of your data.

Practical Application of RSYNC

For High Availability Mission Critical Environments

rsync copies files and folders to or from local and network destinations and is considered the “Swiss Army knife” of the file replication world.  The purpose of this document is to provide a practical approach to everyday use of rsync.  Any one of us can read a man page, and this paper is not intended as such.

The following is a breakdown of the most common switches and syntax that I have found useful for performing quick backups.

Most common switches:
  • -u                   copy only updated or changed files (skips anything already newer at the destination)
  • -ah                  archive mode plus human-readable output (always add this switch)
  • --progress           show per-file transfer progress
  • --delete             remove files at the destination that aren’t in the source
  • --compress-level=0   disable compression
  • --inplace            update destination files in place rather than creating temporary copies

Lastly, include the source and then the destination: always list the source first, followed by a space, and then the destination.  Here’s what a pretty basic backup command looks like:

rsync -ah --progress --delete --compress-level=0 --inplace /source /destination

Additional notes and advanced application:

Copying server to server:

(arguments always given for path in *SOURCE* -> *DESTINATION* format)

rsync -ah -e ssh --progress --delete --compress-level=0 --inplace /source root@192.168.*.*:/destination/folder

Use the -u switch to copy only files that have been updated or changed; rsync will skip anything that is already newer at the destination:

rsync -u -ah -e ssh --progress --delete --compress-level=0 --inplace /source root@192.168.*.*:/destination/folder

An extremely useful option to invoke is --bwlimit, which limits how much bandwidth rsync is allowed to use:

--bwlimit=5000

The number is interpreted as kilobytes per second; include it with the rest of the options, before the source and the destination.

rsync -u -ah -e ssh --progress --delete --compress-level=0 --inplace --bwlimit=1000 vgizzi@192.168.60.38:/home/vgizzi/origbackup/* ~

SPECIAL NOTES ON SYNCING BETWEEN
A SYNOLOGY RACKSTATION AND A FREENAS APPLIANCE

Anyone who has used a Synology Rackstation has undoubtedly become familiar with more than one or two quirks.  Use the following commands to get rsync copying to a FreeNAS appliance working correctly.  Had I made the decision originally, I would have saved the money spent on the Synology, bought two Dell PowerEdge C2100 servers, and put FreeNAS on them.

On the Synology

# make sure the bundled rsync binary is owned by root
chown root:root /usr/syno/bin/rsync

# create the path the FreeNAS side expects when it invokes rsync over ssh,
# and link the Synology rsync binary into it
mkdir -p /usr/local/bin
ln -s /usr/syno/bin/rsync /usr/local/bin/rsync

On the FreeNAS

# make sure rsync is owned by root with the wheel group (the FreeNAS default)
chown root:wheel /usr/local/bin/rsync

# create the path the Synology side expects when it invokes rsync over ssh,
# and link the FreeNAS rsync binary into it
mkdir -p /usr/syno/bin
ln -s /usr/local/bin/rsync /usr/syno/bin/rsync

I had to sync from the Synology to the FreeNAS because of rsync protocol data-stream errors (presumably because the FreeNAS root account’s primary group is wheel, with root only as an auxiliary group).  I don’t believe this will be an issue when going from FreeNAS to FreeNAS, as both will be set to wheel.

Additional options:
  • -t            preserve date and time stamps
  • --perms       preserve file permissions (or at least attempt to)

RSYNC in its final form!

rsync --rsync-path=/usr/syno/bin/rsync -u -a -h -e ssh -r -l -t --no-p --no-g --progress --delete --compress-level=0 --inplace --bwlimit=5000 /volume1/hdswork/* root@192.168.63.30:/mnt/SHARES

This command can also be used to back up local files.
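
If you want the backup to run unattended, the same style of command can be dropped into a crontab entry.  This is just a sketch; the schedule, paths, and log file location are placeholders:

# hypothetical crontab entry: run the backup nightly at 2:00 AM and append output to a log
0 2 * * * rsync -u -ah --delete --compress-level=0 --inplace /source /destination >> /var/log/rsync-backup.log 2>&1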