Thursday, May 8, 2014

PCI Vulnerability 86645: FrontPage Extensions

So I can't lie - I'm not a big fan of PCI compliance in the slightest. One of my favorite quotes about the subject comes from a thread on the hMailServer forum where a user states "PCI compliance is a load of rubbish, the CC companies can't secure their shit, so they make everyone else do it for them and charge them for the privilege." In my position I do have to deal with certain aspects of it from time to time, so whenever I get frustrated in my dealings with it I can come back to this quote and at least crack a smile.

Recently I was tasked with remediating QID 86645 - FrontPage Extensions Configuration Information Obtained:

You would think this one is pretty straightforward based on the information in the screenshot. The scanner application I was using says it was able to obtain the FrontPage configuration from the web server by requesting the "_vti_inf.html" file, which contains this information. To remediate it, they say all you have to do is restrict access to this file so that it can't be publicly obtained. In doing my research on this vulnerability I found one person who fixed it by disabling anonymous authentication in IIS, but I could not do that because this was a public-facing site. So I figured I would just find the file and see if I could rename it or delete it - easy enough, right?

So I hunted through the IIS directories on the server trying to find this _vti_inf.html file and couldn't locate it anywhere - for that matter, I couldn't find it anywhere on the server at all. Figuring that the file not existing was a pretty good workaround, I took this information to the security vendor I work with and they basically said this:

They could still retrieve the information from the server, so even though the file itself didn't live there, it was somehow being returned. After some head scratching, which in turn led to head banging on the desk, I finally figured out why - SharePoint!

I should have made the connection from the blog post I linked to earlier, where the author couldn't find the file on the server either, but for some reason I missed it. Since this particular server hosted sites that live in SharePoint, the _vti_inf.html file was stored inside some content database that I did not have access to rather than in the file system on the server itself. Is it possible that I could have gone into the database and purged the file? I'm honestly not sure - I didn't like the idea of going spelunking into a database I had no business being in, so I didn't want that to be the answer. So along with our fine friends at Microsoft we came up with a decent workaround for this issue that doesn't involve messing with the SharePoint databases:

Step 1: Launch IIS Manager and select/connect to the server you are working with.

Step 2: From the main pane, select "Request Filtering".

Step 3: Click on the URL tab and then click "Deny Sequence".

Step 4: In the Deny Sequence box, enter "/_vti_inf.html" and then click OK.

That should be it! I don't believe you need to restart IIS for this to take effect, so you should be able to test immediately by browsing to http://yourserver/_vti_inf.html. If it's working you should get a 404 error, though you may not see one depending on the browser you are using. If you get a blank page instead of a 404, do a quick view source in your browser; if there's nothing there then you should be in business. Hopefully this helps save some time for folks out there who run into a similar situation!
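One last note: if you'd rather script that check than eyeball it in a browser, a quick Python sketch along these lines (the server name is just a placeholder) will tell you whether the file is still being served. This isn't part of the official remediation - just a convenience for re-testing after the change.

```python
# Quick re-test of the Request Filtering rule - a rough sketch, not an official tool.
# "yourserver" is a placeholder; swap in the real hostname.
import urllib.request
import urllib.error

url = "http://yourserver/_vti_inf.html"

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
        # An empty 200 may still mean you're fine (browser-dependent behavior),
        # but any real content means the FrontPage info is still exposed.
        print(f"HTTP {resp.status}, {len(body)} bytes returned - check the content")
except urllib.error.HTTPError as e:
    # The deny sequence should land you here with a 404.
    print(f"Blocked as expected: HTTP {e.code}")
except urllib.error.URLError as e:
    print(f"Could not reach the server: {e.reason}")
```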

Wednesday, March 5, 2014

Cisco CCNA DC - Part 1

It's been a while since my last blog entry, but I wanted to take a few minutes tonight to write about what I've been working on recently - going after the Cisco CCNA Data Center certification. In my current position I am basically an IT generalist of sorts, in that I have an opportunity to work with a number of different platforms and technologies. "Knower of many, master of none" is a likely grammatically incorrect phrase I often like to use to describe what I do: I have a very broad skill set that spans a number of different IT systems, but I don't ever really get to deep dive into any one area. Jumping around all the time will do this to you, and they haven't developed a flash acceleration solution for brains yet, but I'm keeping my eyes open! That being said, I have not gone after a certification since I obtained my VCP5-DCV, and I wanted to tackle something new and improve in an area that I felt was a bit lacking: networking.

My networking experience started a few years ago basically like this:

Sound familiar to anyone? This was basically how I started at my current position when it comes to networking and I had to pick things up and learn as I went. Admittedly this is a fantastic way to learn but certainly not the most practical or stress-free way to go about it. When I started almost everything we had was Dell networking gear and we have since transitioned a lot of that over to be Cisco equipment. I've learned a lot through those migrations and working with some great partners who were willing to help me learn some new stuff. (Side note: Shoutout to anyone who has worked with Dell and Cisco gear and has dealt with all the fun little surprises that come along with that mess.)

So I learned about STP and VLANs and routing protocols and all that fun stuff, and I decided I was ready to expand my knowledge and go for a cert. The CCNA Data Center track seemed like a better fit for me than the Route/Switch path since I spend a lot of my time in the datacenter and I work with a lot of the technologies on this path every day, including the Nexus platform as well as Cisco UCS. The CCNA DC requires two exams for certification: Intro to Cisco Data Center Networking (640-911) and Intro to Cisco Data Center Technologies (640-916). Today I am happy to say that I took and passed the 640-911 exam, so I'm halfway to my cert! This was the first Cisco exam I had ever taken, so I was pretty nervous this past week or so as I was preparing, but thankfully all of the work paid off and I did very well.

There's a number of great resources out there for the 640-911 exam but I wanted to share a few of the ones that really helped me out:

Chris Wahl's Intro to Data Center Networking on Pluralsight: I go back and forth on video training courses because let's be honest - some of them are about as fun as watching paint dry - but this one is very good and I highly recommend anyone looking at the CCNA DC check it out. Chris knows the material very well and does a good job of keeping the course interesting, complete with Voltron references and some hilarious mnemonic devices thrown in to help you remember stuff. I know that I will never throw sausage pizza away after watching these videos.

CCNA 640-911 Study Guide by Todd Lammle and John Swartz: This book is a fantastic tool that helped me out a lot. In addition to having great material that is very easy to read and follow along with, you'll also get a number of practice labs and exercises to help test your knowledge. This book also includes a very basic Nexus simulator that will help you perform the labs, which is great if you don't have access to real Nexus gear. Giant IT books can be daunting but this one is definitely worth the price!

Subnettingquestions.com: Everyone going for a CCNA will absolutely need to know how to subnet. I stumbled across this website while I was looking for some additional subnetting practice and it was a great help. Thanks to Kim Nobav for putting this together.
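Side note: if you want a quick way to double-check your subnetting practice answers, Python's built-in ipaddress module works nicely. A small sketch (the practice values here are made up, not from the exam):

```python
# Check subnetting practice answers with the Python standard library.
import ipaddress

# Example practice question: mask, network, broadcast, and usable host
# count for 192.168.10.77/26 (made-up values for illustration).
net = ipaddress.ip_network("192.168.10.77/26", strict=False)

print("Network:     ", net.network_address)    # 192.168.10.64
print("Mask:        ", net.netmask)             # 255.255.255.192
print("Broadcast:   ", net.broadcast_address)   # 192.168.10.127
print("Usable hosts:", net.num_addresses - 2)   # 62
first, *_, last = net.hosts()
print("Host range:  ", first, "-", last)         # 192.168.10.65 - 192.168.10.126
```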

Outside of those resources there are a few other notes I'd like to share for anyone looking to tackle these exams:

  • On these exams when you click Next your answer is FINAL: Coming mostly from a VMware exam background, I was not used to this at all. On the VCP exams you have the opportunity to mark questions for review, and if you have time left over at the end of your exam you can go back through the questions you marked. This is not the case with Cisco - when you click Next to move to the next question, your answer is final. It made me spend a little more time than I normally would on each question, so be aware of your time remaining or you can get yourself into trouble.
  • Practice, practice, practice!: I really learn things in IT by doing, so the fact that I had worked with some of these technologies for a while really helped me out, but it's still good to practice! Not everyone has access to Nexus gear or lab environments, so this can be a challenge - obtaining Cisco gear is not the cheapest of propositions. The Nexus simulator included with the book I referenced is a good starting point if you have nothing else, and there are also articles out there for Creating a Nexus 1000v lab in VMware Workstation/Fusion if you have the resources to do that.
  • Formulate your plan and then do it!: Everyone is different but I knew that doing a bunch of cramming the night before the exam would likely not help me. I had been studying for several weeks leading up to my exam so the night before I did my mostly normal routine and got a good night's sleep. Figure out what works best for you ahead of time and then stick to your plan - it will help you be less stressed and more confident going into the exam.

So next up is the 640-916 after I take a little break to recharge. I'm going on vacation at the end of the month so that'll slow me down a little bit but I hope to tackle this in the next couple of months. Stay tuned for part 2 - hopefully with CCNA DC certification in hand!

Wednesday, October 2, 2013

Fun with Tintri Part 3

Last time I went through my performance testing with the Tintri VMstore T540 to see how it stacked up against our current storage platform and to get a good idea of just how far I could push it. The results were great and I knew that this thing would be able to handle just about whatever I might need to throw at it. At this point I really wanted to throw some real live workloads at it, but I still had a few questions to answer before I could move on to that step. These are the kinds of questions that may not exactly be at the top of your mind when thinking about new storage, but they are important to answer nonetheless. Questions like:

  • What does Tintri's support system look like?
  • What kind of alerting and notification capabilities does it give me?
  • They say it's reliable and all but what really happens if I unplug that cable/remove that disk?

I decided to do some experimenting to see how the T540 would react to various hardware issues/failures so I could get an idea of what to expect and where to look when I had some actual VMs running on it. Before we get into the actual testing breakdown, it's good to take a quick look at the hardware status page provided in the T540 web interface:

This dashboard of sorts is the best place to start when you're having a system problem or think you may be having one. It gives you a quick snapshot of all the components in your system and how they are currently operating. It also tells you which of the controllers is currently active, and this is where you would perform a manual controller failover if you ever needed to. This screen will change and highlight information accordingly as the system experiences certain issues, as I will illustrate in my test cases.
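Quick methodology note before the tests: as I'll mention in Test 2, I kept a continuous ping running against a test VM on the Tintri so I could spot any dropped packets during a failover. If you want timestamps on any misses instead of just staring at a terminal, a rough sketch like this will do it (Linux-style ping flags, and the target IP is a placeholder):

```python
# Rough packet-loss monitor for failover testing - just a sketch, not a Tintri tool.
# Assumes a Linux host; the target address is a placeholder for your test VM.
import subprocess
import time
from datetime import datetime

TARGET = "192.168.1.50"  # test VM living on the Tintri datastore (placeholder)

while True:
    # One ping with a one-second timeout (Linux ping flags).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", TARGET],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    if result.returncode != 0:
        print(f"{datetime.now().isoformat()}  missed ping to {TARGET}")
    time.sleep(1)  # Ctrl+C to stop
```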

Test 1: Pull a Network Cable from the Active Controller
I mean really, what else could you possibly try first? This is the quick, simple test to see how the system responds to the loss of a single network connection. If you have your system properly configured for redundancy, the answer is you won't even notice it. I pulled one of my 10 Gig data connections from the active controller and the standby NIC took over without a hiccup. The hardware status page does let you know when a NIC goes down:

Test 2: Pull both Network Cables from the Active Controller
So now I know that the system doesn't even flinch at a single network cable loss, but what if both data connections were lost from the active controller? As you probably guessed, this will cause a failover to the standby controller which happens seamlessly. I had a continuous ping going to a test VM I had on the Tintri during the controller failover and I didn't even see a dropped ping. I can't guarantee that this will happen for you or that it will always work that way but in this case it never missed a beat. The hardware tab reported that both my data connections on the former active controller were down and there were also some system alerts generated. This leads us on to the alerts tab:

The Alerts button at the top of the screen in the web interface will have a number in parentheses next to it if the system has new alerts to be reviewed, which you can see in the picture above. The first entry there is just a notice telling me that my data network is up but not redundant (from when I pulled the single cable), and then there are alerts from when both cables were removed and the system initiated the controller failover. If you configured your system to do so, you'll also receive an e-mail when an alert is generated, which I will show in just a little bit. Once you've reviewed the alerts you've got a few options for what to do with them - you can mark them as read, which removes them as an active alert, or you can archive them, which moves them to the archived section in case you need to review them later. One of my favorite features here is the ability to add comments to the alerts in case you've got multiple administrators working on a single Tintri system, so you can leave notes for issues that pop up. Of course this gives me a chance to leave some horrendously awful comments for my poor co-workers. I wonder if Tintri support sees them in the autosupport data? One can only hope!

Test 3: Pull a Power Cable
This one is another quick and easy thing to test and it works as expected. A T540 chassis has two power supplies and a single one can power the entire system. Pulling a power cable will generate an alert in the alert log and will send you an e-mail. I noticed that it did take a little longer to get this alert than some of the other ones but that may be by design due to inherent issues with power.

Test 4: Pull a Hard Drive
Anyone who has ever worked with storage knows that drive failures are a bit more common than we would like, so it's important to know how the system is going to handle them. I walked up to the chassis and yanked the drive in bay #1 to see what this would do, and as expected the system handled it without issue. My test VM kept on kicking and didn't notice a thing. Going back to the web interface, the first thing I saw on my hardware dashboard was that the drive I removed had changed from green to red to indicate it had been removed, and it also showed one of the healthy disks in rebuild mode.

I only pulled the single disk in my testing, but according to the Tintri T540 Specifications each disk group of SSDs and HDDs is in a RAID-6, so it should be able to tolerate two disk failures per RAID group. After I noticed the change on my hardware dashboard, the e-mail alerts started to flow in. The first image shows the alert I received when the disk was removed and the rebuild started:

And this next one is the notification after the disk was reinserted into the system:

Tintri Support
One of the things that I really wanted to experience was interacting with Tintri support to get an idea of how the support process works and figure out what I could expect when I need to get assistance. I didn't want to submit a service request for no reason but I quickly found that it wouldn't even be necessary. During initial system setup we configured the T540 to send alerts not only to my team but also to Tintri's support team and by doing this support cases were automatically created when alerts were generated by the system. Here's an example of a case e-mail I received after the controller failover I caused in Test 2 by yanking both of the data connections from my active controller:

Now in my case I was just testing, so I didn't have a need to engage their support personnel, but it was nice to see that a case was automatically opened and that I could move forward with it if I needed to. If you don't respond to a case that is automatically opened, it will be closed after a period of inactivity or once they see that the condition has cleared, so there's nothing you have to do there.

One thing I do want to point out is that their support team seems fairly proactive in reaching out when they see that your system is having issues. At one point I was doing some testing with Horizon View desktops on the T540 and I had global snapshotting turned on, which caused it to try to snapshot some replica disks that simply should not be snapshotted. This generated an alert every time it tried, and after a few of these alerts I got a call from Tintri support just to ask if everything was OK and to offer assistance. This was pretty great to see and it's something I just have not experienced with other storage providers out there.

In general if you need to submit a support case that isn't automatically opened you would log into the support portal website where you can select from all the appliances you have registered and open a case on the one experiencing the problem. While they recommend you submit cases via the support portal there is also an 800 number you can call if you need to open a case that way. I must say they do keep the web submission form nice and straight to the point:

Conclusions
While I didn't test every possible scenario here, it's clear that the system is designed with reliability in mind and can handle most of the common fault scenarios you're likely to encounter. Tintri has provided a simple interface that lets you quickly get the health and status of your system and view and respond to any alerts that may be generated. Their support methodology allows for manual and automatic case creation, and their staff takes a proactive approach to case management, which is a great thing to see. Time will tell, as Tintri continues to grow, whether they can maintain this approach, but for the time being I'm very happy with the interactions I've had with their support.

As I've been writing this entry I've finally started to move some stuff in my environment over to the Tintri and so far things are looking good. I'm not entirely sure what I'm going to write about next but word on the street is that Tintri OS 2.1 will be coming soon along with the very intriguing Tintri Global Center and if that is the case I will certainly be upgrading to that and taking a closer look into the functionality that it will provide. We'll see if something else hits me before that time but either way stay tuned!

Friday, September 20, 2013

Fun with Tintri Part 2

In my last post I talked a bit about the hardware and getting set up with our Tintri VMstore T540 system which went very smoothly. I had thought about writing my next entry about going through the initial system setup and configuration but to be honest it's so easy I didn't think it was necessary. Part of me wanted to start throwing some workloads on this thing and see how they performed but I needed to collect at least some performance data before I started moving my existing stuff around. I've never really done any serious I/O testing before and I wasn't entirely sure where to begin, but eventually I stumbled upon a fling provided by VMware called the I/O Analyzer and this thing is a fantastic tool. You download and deploy a small OVA template that creates a lightweight Linux VM. The cool thing is that you can deploy multiple copies of the appliance and control them all from a single instance. There's a great video by Chris Wahl that shows you how to quickly get up and running with this appliance and I highly recommend anyone looking to learn how to use it check it out.

So after watching the video that Chris created I dived right in and got started running a number of different tests against my current storage system and the Tintri T540. There are a few notes/caveats about my setup that I wanted to share:

  • The Tintri system did not have any active workloads other than the test workers on it at the time of testing. My current storage was running an active production workload but it has enough additional capacity to allow for the testing to run concurrently.
  • The current storage system I'm using does have some flash onboard but it is only servicing read operations whereas the Tintri's flash is serving all of the I/O operations.
  • Both my current storage and the Tintri are serving storage to my VMware environment via NFS.
  • Testing equipment consisted of 3 Cisco UCS B200 M2 blades, which have dual 6-core Intel Xeon processors and 96 GB of RAM. All of my networking is running at 10 Gig.

My goals with this testing were twofold: One, determine what certain workloads would deliver in terms of throughput on each system. Two, see just how far I could push the Tintri to get an idea of how much performance it can deliver. Without any further delay, let's take a look at some of the results:

VMware I/O Analyzer Tests

Test 1 - Single Worker Process OLTP 4k - 5 Minute Duration

Storage | Workload | Total IOPS | Read IOPS | Write IOPS | Total MBPS | Read MBPS | Write MBPS
Current Storage | 4k 70% Read 100% Random | 8355.75 | 5850.52 | 2505.23 | 32.64 | 22.85 | 9.79
Tintri | 4k 70% Read 100% Random | 20947.37 | 14664.28 | 6283.08 | 81.83 | 57.28 | 24.54

So this first test was meant to mimic the I/O patterns of a typical OLTP database workload. You can see that while both systems handled it relatively well, the Tintri pushed more than double the IOPS of my current array over the same 5 minute duration. My company does run a decent sized OLTP workload, so this test was of significant interest to me as a way to get an idea of how one of my current systems might operate on a Tintri vs. what I can deliver today.
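One quick sanity check you can do on these reports: the throughput column is basically just IOPS multiplied by the block size, so the MBPS numbers should fall out of the IOPS numbers. A tiny sketch using the 4k rows above (assuming the tool reports binary megabytes):

```python
# Reported throughput should roughly equal IOPS x block size.
# Numbers are the 4k OLTP rows from the table above; block size in KiB.
block_kib = 4

for name, iops in [("Current Storage", 8355.75), ("Tintri", 20947.37)]:
    mbps = iops * block_kib / 1024  # KiB/s -> MiB/s
    print(f"{name}: {mbps:.2f} MBPS")

# Prints roughly 32.64 and 81.83 - matching the table.
```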

Test 2 - Single Worker Process Exchange 2k7 - 5 Minute Duration

Storage | Workload | Total IOPS | Read IOPS | Write IOPS | Total MBPS | Read MBPS | Write MBPS
Current Storage | 8k 55% Read 80% Random | 3338.28 | 1833.16 | 1505.12 | 26.08 | 14.32 | 11.76
Tintri | 8k 55% Read 80% Random | 12837.28 | 7057.01 | 5780.27 | 100.29 | 55.13 | 45.16

In this test I used an Exchange I/O pattern simulation, which was of interest since I run Exchange in the environment as well, though we really don't have any major performance issues with it. Even so, it was good to see how this workload would be handled on each system. You can see that the Tintri didn't push as many IOPS as it did with the OLTP simulation, but that makes sense given that the I/O was more evenly split between read and write operations and the block size went up to 8k. Regardless, it still did about 4 times the IOPS of my current storage, so that was great to see.

Test 3 - Multiple Workers w/ Varying Workloads - 5 Minute Duration

Storage | Workload | Total IOPS | Read IOPS | Write IOPS | Total MBPS | Read MBPS | Write MBPS
Current Worker 1 | 4k 70% Read 100% Random | 1509.71 | 1057.44 | 452.27 | 5.9 | 4.13 | 1.77
Current Worker 2 | 8k 55% Read 80% Random | 5005.66 | 2749.87 | 2255.79 | 39.11 | 21.48 | 17.62
Current Worker 3 | 8k 95% Read 75% Random | 3225.18 | 3062.74 | 162.44 | 25.2 | 23.93 | 1.27
Tintri Worker 1 | 4k 70% Read 100% Random | 13036.61 | 9132.6 | 3904 | 50.92 | 35.67 | 15.25
Tintri Worker 2 | 8k 55% Read 80% Random | 12288.17 | 6759.12 | 5529.05 | 96 | 52.81 | 43.2
Tintri Worker 3 | 8k 95% Read 75% Random | 5944.5 | 5645.89 | 298.6 | 46.44 | 44.11 | 2.33

Current Storage Total IOPS: 9740.55
Tintri Total IOPS: 31269.28

For the next test I decided to bring in some more worker VMs to run some concurrent testing. I reused the OLTP/Exchange workload simulations from Tests 1 and 2, and I also added another workload, which I believe was the web server option in the I/O Analyzer. Once again this test showed me that both storage systems handled the workloads fairly well, but the Tintri simply has a lot more horsepower than my current system. It really shows you the power of flash storage and the impact it can have on your major applications. My current storage system is a full-blown traditional SAN taking up a full 42U server rack with probably about 150 SAS disks in it, and this 3U Tintri system with 16 disks is blowing right by it on every test.

At this point I had a pretty good idea of what the Tintri was capable of, and I had certainly shown that it has a lot more power than the storage I'm using today. My next objective was to run a few tests to see just how much throughput I could push through the Tintri system, because why the heck not?

Test 4 - Tintri Stress Test w/ 5 workers - 5 Minute Duration

Storage | Workload | Total IOPS | Read IOPS | Write IOPS | Total MBPS | Read MBPS | Write MBPS
Tintri Worker 1 | 4k 70% Read 100% Random | 8389.25 | 5872.5 | 2516.75 | 32.77 | 22.94 | 9.83
Tintri Worker 2 | 8k 55% Read 80% Random | 9139.71 | 5024.52 | 4115.19 | 71.4 | 39.25 | 32.15
Tintri Worker 3 | 8k 95% Read 75% Random | 3719.73 | 3533.8 | 185.93 | 29.06 | 27.61 | 1.45
Tintri Worker 4 | 16k 66% Read 100% Random | 8548.35 | 5636.2 | 2912.15 | 133.57 | 88.07 | 45.5
Tintri Worker 5 | 8k 20% Read 0% Random | 3923.99 | 784.34 | 3139.66 | 30.66 | 6.13 | 24.53

Total Tintri IOPS: 33721.03

So the IOPS count on this test was a little lower than I expected, but still very, very good. I re-used a few of the same workload patterns and threw in a few other types from the I/O Analyzer to run 5 concurrent workloads against just the Tintri. Since I only had three hosts to work with, I figure some of the workloads were likely competing with one another, which may account for why I didn't see a significantly larger total than I did in Test 3.

After posting my first blog entry on Twitter, Justin Lauer recommended that I try an appliance called Tingle that is provided by Tintri and available to customers on their support site. Much like the VMware I/O Analyzer, this is an OVA template that you deploy into your environment, and it can run I/O simulations against your storage. I already had enough good data from the I/O Analyzer, but I figured it might be fun to give Tingle a shot too and see how hard it would push the Tintri. I deployed 5 of the appliances and ran their "IOPS" test, which executes 20 threads doing random reads and 1 thread doing random writes for a read/write ratio of 71:29. Tingle does not give you the detailed reports that the I/O Analyzer does (you are able to watch the appliance console screen and read the output to get an idea of how each appliance is doing), but my I/O stabilized around 55-60k IOPS, and for a very brief period I had the system pushing 75k concurrent IOPS:

I'm honestly not sure how accurate a test this was, and I tend to lean toward the I/O Analyzer numbers as a better reflection of what I would see in the real world, but mainly what I learned from all of this is that the Tintri's got some serious power boxed up in a relatively small appliance. I think it's going to be hard for businesses to ignore the performance and ease of use you can get out of something like a Tintri when comparing it against more traditional storage options. The fact that I was able to get this system up and running in very little time and get this kind of performance is extremely appealing to me, and I think it will be to many others as well.

Now that I've got some good data on how my Tintri T540 is going to perform, we're moving closer to putting some actual workloads on it! I'm also interested in finding out what kinds of alerting and information I can get from the Tintri and how their support system works, because that's something that can't be ignored when you're looking at new storage. Say what you will about how the support rates from company X or Y, but you know that the big players usually have pretty big support organizations behind their products and the phone gets answered when you really need it to be. I will dig deeper into this and more as my adventure with Tintri continues!

Friday, September 13, 2013

Fun with Tintri Part 1

One of the big things going on in the storage industry right now is the noise being caused by flash storage. It seems like almost overnight companies have popped up and they are putting flash storage almost everywhere. You've got hybrid storage options with flash and spinning disk, all-flash storage arrays, host-based flash solutions, and probably lots of other types I don't even know about yet. Earlier this week Cisco made some noise when they announced they would be acquiring flash storage provider Whiptail, a company I had never even heard of until I read the press release. This sparked some interesting debate amongst the tech crowd on Twitter and the various blogs I check out about how this acquisition may fit into Cisco's overall strategy, and I suppose that remains to be seen.

So anyway, the point is that flash is a pretty big deal right now, and it's clear that many storage options taking heavy advantage of it are popping up for the enterprise. Some of these options are also moving away from the traditional SAN model toward a more purpose-built approach or targeting a certain business function. As I run a very heavily virtualized environment with very few traditional physical systems still in operation, I was attracted to Tintri early on in my research and wanted to learn more about them, which I had the opportunity to do at this past VMworld. My team did some investigation and talked with some folks out in the industry, and we decided it would be worth taking a look at Tintri; earlier this week our VMstore T540 arrived.

Unboxing

The first thing that you'll find when you receive a Tintri appliance is that they do a nice job of packing it up quite well.

OK, so that's not too terribly exciting. Once we got this thing out of the box we saw that the appliance itself is a solid 3U chassis and is very much assembled by our fine friends over at Supermicro, whose hardware I have since learned is used by many other storage vendors for their offerings as well. I believe I had heard that this was the case at some point, but I didn't quite remember until I got it out of the box and noticed the branding on top of the chassis. (Sorry some of these pictures are a little blurry - I'm new at this mess!)

The Innards!

Being engineers naturally the first thing that we did was pop off the top cover and see what's going on inside.

I'm no expert when it comes to case design, but it certainly looks like they built this so that people won't screw around with the magic smoke inside these things, and I can't say I blame them for that one. It certainly looks well organized inside and they've done a good job of keeping you away from the important stuff. We weren't interested in messing with any of the components inside anyway - just checking things out. You'll quickly see that the Supermicro branding jumps out at you again on the inside.

The Bytes

So the T540 is a 16-drive system with eight each of solid state and spinning drives. It looks like the HDDs are 3 TB drives from HGST, a Hitachi company that was purchased by Western Digital (had to look that mess up), and the SSDs are from Intel. They've rigged up the SSDs in standard HDD-size drive carriers, so I wasn't able to see a ton of information about the SSDs themselves without removing a drive, and we weren't quite interested in doing all that.

Racking/Cabling

The rails included with the Tintri are fairly easy to install, which was nice to see. So many times have I purchased various systems and been sent an awful rack mount kit that just makes me want to cry. With the Tintri there's a section of the rail that you have to remove and attach to the system itself, and rather than using screws there are some notches on the chassis that the rail slides under to hold it in place. You may have to use a hammer or whatever you've got lying around to get it in there nice and snug, but it goes on without too much trouble. The rails themselves snap into your rack much like the Rapid Rails I have with my Dell PowerEdge servers. Once you get all of that in place, you just line it up, slide the rail sections on the chassis into the catch on the rack rails, and then tighten it down.

Another nice touch: outside of the standard power cable that you get with most devices (NEMA 5-15 to C13), they also included some adapters to go from the 5-15 connection to a C14 connection, which I commonly see used inside datacenters. It didn't help me in my situation (I need C13 to NEMA 6-15P for my PDUs and I keep plenty of extras on hand), but I'm guessing the folks at Tintri have been in that situation before, where you buy a new device and it doesn't come with the right power cables for your datacenter. Thank you for at least giving us some options!

The T540 is a dual-controller unit running in Active/Standby mode. Each controller has four network connections - two 1 gigabit connections for management traffic and two 10 gigabit connections for data traffic. There's also a slot for an optional NIC that you can add and dedicate to replication traffic, but our system did not have that. To have the minimal configuration and still maintain redundancy you'll need four network connections - a 1 Gb and a 10 Gb connection to each controller. I decided to go all out and cable up all eight connections for a little extra comfort. Naturally I cabled each controller to two different switches so I would have protection against both cable failure and switch failure. There are also two redundant power supplies along with a connection for a KVM dongle so you can access the local console to do the initial config. If you connect up everything it'll look a little something like this:

The story so far...

I haven't had a chance to do much with the system yet but the racking and cabling was fairly painless. In the next few days I'll have an opportunity to start playing with this thing and I'm very interested to see what kind of performance I can get out of it. I hope to share more info as I move through getting this thing up and running and get some VMs running on this guy. Stay tuned!

Monday, September 2, 2013

VMWorld 2013: Part 2/Lessons Learned

So in my last post I summarized a lot of the high points of my VMWorld experience, even though I didn't make it through every day. Rather than type up a summary of the rest of the days, I thought it might be more beneficial to talk about lessons I learned from my first VMWorld experience, aimed at first-timers who may be looking at the conference next year or in future years. I know that in my preparation I was reviewing tips and lists from a number of notable bloggers and industry folks out there to figure out how to best get ready for the conference, and I highly suggest you read those as well to get varying opinions. Without further ado:

1. Wear comfortable clothes - This one was easier for me than I think it may be for folks who are trying to network to find a new job or opportunity, but I wanted to make sure I was comfortable this week. My standard work attire consists of a comfy pair of sneakers, jeans that aren't torn to shreds, and a polo or button-up shirt, and this is what I wore throughout the entire conference. Comfortable is relative to the person wearing the clothes, so this can be whatever you want it to be. I saw people wearing t-shirts and jeans, I saw people in suits, and just about everything in between, so odds are you'll be fine with whatever you decide to wear.

2. The food at VMWorld isn't awful - Don't get me wrong, it's not GREAT either, but it's certainly edible. I did my best to eat breakfast and lunch at the conference most days to keep my expenses down, and most days there were decent options. There are much better options out there, so if you have the budget or just don't want to eat the VMWorld food you can do so, but most everything I read prior to the trip was bashing the food, so I wanted to provide my own opinion on this one. Plus the meals are a great time to strike up a conversation with other IT pros and chat about the conference or swap IT stories. I had some successes and failures with this myself, but I'm glad I made the effort.

3. You can't go to every breakout session even though you might really want to - I booked 5-6 sessions every day that I was interested in seeing, and I probably bailed on 1 or 2 daily because there simply wasn't enough time. You'll find very quickly that there is so much to do and see that something is going to come up, and you'll likely end up missing a session or two if you book too many. For me, I usually missed a session because I was on the floor of the solutions exchange talking with a customer of a product I was interested in, or because I was just super tired and needed to take a break. The good thing is that if you bail on a session, that opens up a spot for someone in the standby line who wants to see it, so it balances itself out. Try to schedule the sessions you really want to see, and remember that you can always watch the recordings after they are posted online. Which leads me to the next item...

4. Don't take a picture of every slide in the breakout sessions - I seriously could not figure out why people were doing this. The sessions get posted online after the conference and all attendees have access to these recordings, but still I saw phones and tablets going up after every slide change. All I could do was shake my head. Don't be that guy/gal.

5. Take breaks - The days will go by quickly, but it's best to remember to stop and take a break every now and then so you don't wear yourself out. This could be stopping by the hang space for a bit, or just arriving at a session a bit earlier than you planned and taking a rest. Along those lines, I carried water and snacks with me at all times in my VMWorld backpack and that was extremely helpful. It can get a bit hot in the solutions exchange, especially with all of those people in there, so it helps to have the water nearby. There are also plenty of water dispensers set up if you don't want to carry it around all the time.

6. Use social media - One of the things I enjoyed most during the conference was following along on Twitter with what the big names had to say, but also what people thought of the sessions I was in. Every session has a session ID which makes for an easy hashtag, and I did my best to use these to comment on every session I attended. Some sessions had a lot of activity on Twitter this way, and others did not, which was somewhat disappointing after experiencing some of the more socially active ones. I was also checking badges for Twitter handles and I was sad that I didn't see very many amongst the general attendees. Perhaps they weren't on Twitter or they just didn't want to share, and that's fine either way, but I was hoping to find some more folks to follow outside of the big names and I didn't find a whole lot of them.

Side note from this one: Eric Shanks did a great statistical look at Twitter usage from this year's VMWorld here.

7. Engage the presenters or other big names - I'll admit I didn't do as good a job with this one as I would have liked, but I did better than I expected. Most of the folks are very approachable and will talk to you or just say Hi if that's all you're looking for. One of the people I had a chance to meet and speak with was Simon Long and he was very nice to talk to. His blog was a great resource when I was starting out in virtualization and prepping to take my first VCP exam (Version 4) and I wanted to thank him for putting all of that together. I also ran into Scott Lowe on my way out on the last day and I just said Hi to him and he was very nice and offered me an extra Spousetivities t-shirt for my wife which was awesome. This is one I hope to improve on next year.

8. Have extra room in your luggage - Airline bag fees suck and the last thing you want to do is have to check an extra bag on the way home. You are going to come home with more stuff than you showed up with unless you just completely avoid the solutions exchange which I do not recommend. I ended up coming home with like 17 or 18 new shirts, some bags, plus all the little trinkets like buttons and stickers and such that vendors like to give out. I brought a larger suitcase than I normally travel with for this specific reason and I am VERY glad that I did. I was able to get the extra backpack and all this stuff into my bag which just made it easier to deal with when I was going home (although the bag was quite a bit heavier I do admit). Most airlines allow two carry-ons and if they run out of room in the overheads they'll check it for you for free so that's another good way to handle it if you prefer.

9. Let your vendors know you're going to VMWorld - Most of them are throwing parties or putting on events during the nights of the conference, and they will usually extend you an invite if they know you're going. I was fortunate enough to attend the NetApp MVP event, which took place at AT&T Park the night before the VMWorld party, but I didn't get an invite until a few days before the conference because my rep didn't know I was going until the last minute. NetApp put on a pretty awesome party and I got to meet some sports icons including Brandi Chastain, Joe Morgan, and Bret Saberhagen:

So let your reps know you'll be there and you never know what you might get to check out! Vendors on the solutions exchange sometimes can get you invites to their events if you stop by and ask them about it as well but I was already booked up before I even arrived.

10. Enjoy it! - It is a professional event and I was primarily there to learn and grow as an IT pro, but it's a heck of a lot of fun too. Take some time to "smell the roses," so to speak, and take it all in and enjoy it. It really is a blast!

Hope this helps some folks who will be heading out to their first VMWorld in future years. If you remember nothing else from what I wrote here, do remember #10 - enjoy the experience! I know I did and I'm already counting down to VMWorld 2014. Just 360ish days to go!

Saturday, August 31, 2013

VMWorld 2013 Part 1

So I've finally arrived back home in North Carolina after my first trip to VMWorld, and I've had a day now to rest and recharge (sort of) and collect my thoughts on my experience over the past week. As I mentioned in my previous entry, this was not only my first trip to the conference but also my first trip to California, so it was pretty intimidating the past couple of weeks and I was pretty nervous about the whole thing. I did the best I could leading up to the conference to keep myself relaxed, including several trips to the gym for some pretty heavy cardio, and I also had a massage the day before I flew out, but there were still some unsettled feelings leading up to our departure. Once I set foot in San Francisco and had a look around the city, things started to settle down for me and I was able to relax and enjoy the ride - and it was one heck of a good one.

Note: This week went by quick and it would be tough to write about everything that I saw/heard/did so this will mostly be a summary of my experiences and not everything in its entirety. I mean ain't nobody got time for that.

Pre-Conference

I arrived a day early which gave me some time to check out San Francisco and that I did. Along with my former co-worker / current Varrow superstar / always awesome friend Thomas Brown and his wife we set out to check out some of the awesome things that San Francisco had to offer. We were able to spend some time in Chinatown, the area around Golden Gate Park (where we randomly ran into a guy on the street who was from Raleigh and recognized my Hurricanes shirt - love it), the Golden Gate Bridge/Presidio area, and Fisherman's Wharf. Seeing the Golden Gate Bridge was a highlight for sure:

We also trekked through the Presidio to make it over to the offices of Lucasfilm where we tracked down the rumored Yoda statue that we had heard about:

The next day we spent most of our time hiking up to Fisherman's Wharf and checking things out up there before sailing out to Alcatraz Island. We were disappointed that there was no guided tour of the facility, but we were able to take an audio tour, and we also listened to a speaker who talked about various escape attempts, which was really cool. I didn't have any pictures of myself taken inside the prison because it really didn't feel like the appropriate place for that, but we did get a nice shot on the way back sailing across the bay with SF in the background. My hair certainly enjoyed the ride:

The Show!

Sunday night was the welcome reception in the solutions exchange, where we spent most of our time talking to vendors and checking out some of the stuff they had to show. We ran into pal / former colleague Josh Atwell, who was manning the VCE booth, and I also got to briefly meet Tim Jabaut, and we talked about what was up with the Raleigh VMUG (looking forward to that getting back up and running soon).

Monday started off with the general session and the formal announcement of vSphere 5.5 as the next big release of the platform. The two big things discussed along with this were vSAN (or is it VSAN? I still don't know for sure) and the big one - NSX. I'm not a big networking guy, and while I do know some of the basics, the finer points of networking escape me, so I'm not entirely sure what to think about NSX at this point. From what I saw and heard it seems to be VMware's answer to "Software-Defined Networking," as it will allow you to control your network stack at the software layer. I don't entirely know how this is going to work exactly, but I am intrigued for sure and very curious to learn more about what this is going to bring. I heard that the NSX lab was one of the most popular choices at the HOL this year, and I intend to try it out once it gets posted to VMware's HOL online if it hasn't been already.

vSAN also sounds like an interesting concept and seemed to me like a tip of the hat to companies like Nutanix that have been doing converged infrastructure for a while now. The general concept is that you can pool together the local storage living in your ESX hosts and use it as a virtual SAN for cluster shared storage. It sounded to me like an apology for the needlessly complex vSphere Storage Appliance, which I don't think was terribly successful - I never heard much chatter about it and I never talked to anyone who actually tried to implement it. I did notice and enjoy this quote from Jason Nash during the vSAN portion of the keynote:

@TheJasonNash Bet the @nutanix people have a smug look on their face right about now…. #VMworld

The rest of the day was spent bouncing between breakout sessions and the solutions exchange. I had the privilege of attending the top session of the entire conference: PowerCLI Best Practices - A Deep Dive with Alan Renouf and Luc Dekens. Not only did they show off some cool PowerCLI stuff, but they also gave a quick look into an upcoming fling called webCommander. This fling provides an "App Store" style web interface for scripts that you can publish for others to use and execute from the web. I am VERY interested in utilizing this at my company and will be checking it out as soon as it's available. Automation was a big theme of this year's VMWorld, and I am looking to find new ways to automate the things I typically do manually; PowerCLI is a great way to get started in this direction. I've been working with it closely in the weeks leading up to the conference and I will continue to do so as much as I can.

In the solutions exchange I spent a lot of time at the Tintri booth learning more about their product and speaking with their employees and customers to find out as much as I could. One of the first things they'll tell you about is how easy their systems are to manage. There are no file systems to set up or LUNs to create or anything like that. Every Tintri system gives you one NFS datastore that can be mapped to your hosts and is designed with one thing in mind: running VMs. They also give you per-VM policies and performance statistics. If you want to take a snapshot of all of your VMs or just a single VM, you can do that. You can also see performance statistics at the per-VM level, so you can quickly identify where you may have issues or bottlenecks in your environment and hopefully diagnose the problem much faster than you could with traditional storage. One of the folks I had a chance to speak with at the booth was Jeff Greenfield from Calvin College, who talked about his experience implementing and running Tintri storage, and he had many great things to say about it. I recommend checking out the video linked here, as well as some of the other videos they have posted of various customer experiences, to get a feel for how folks are currently using it and how they feel about it. One of my goals coming out of VMworld is to learn more about Tintri and hopefully get a chance to interact with one and take it for a test drive.

I've already written quite a bit here and we're only through Day 1 of the conference! I think I'll have to break this entry up into two parts so it doesn't get too crazy long. More to come on my first VMWorld trip, hopefully in the next few days!