Friday, September 28, 2007

Windows Server 2008 Core is not what I had expected.

I decided to poke around the "Core" mode of Windows Server 2008 RC0 this morning. I had expected the graphical installer to finish, and then to reboot into a command line similar to Linux. Nope. It boots like Windows, but without the shell. So you just see the background and a command prompt window.

The resolution is set at 800x600 at this point. I decided to see if VMware Tools would work. It didn't launch automatically, which I didn't expect it to anyway. I switched to the CD drive and ran it manually.
The install gave me a warning about my help files being out of date or something, so I just hit "No" on that dialog. Everything else seemed to work fine, and I rebooted. When Windows came back up, I had a 640x480 resolution. Yay. I imagine I could change this through the registry or something, but I haven't looked into it too deeply.

Next step is to get the network up and running. It's still too new to join to my domain, so I'll skip that part. I used this guide for instruction.

netsh interface ipv4 show interfaces
netsh interface ipv4 set address name="2" source=static address=192.168.1.223 mask=255.255.255.0 gateway=192.168.1.1
netsh interface ipv4 add dnsserver name="2" address=192.168.1.5 index=1
netsh interface ipv4 add dnsserver name="2" address=192.168.1.6 index=2
netdom renamecomputer %computername% /newname:W2K82
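
The rename doesn't take effect until a reboot. As a rough sketch (standard Windows commands, nothing Core-specific), this is how I'd restart and then double-check that the settings stuck:

```shell
:: Reboot so the new computer name takes effect
shutdown /r /t 0

:: After the reboot, confirm the static IP and DNS settings
ipconfig /all
netsh interface ipv4 show config
```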


According to the manual, you can use the Core installation for the following:

  • Active Directory Domain Services (AD DS)
  • Active Directory Lightweight Directory Services (AD LDS)
  • DHCP Server
  • DNS Server
  • File Services
  • Print Services
  • Streaming Media Services
As of now we plan to use it for our 2 AD/DNS/DHCP servers and our file server. It's a pain to set up, but still easier than Linux. And for these services we won't be touching the machines at the console very much.
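
Installing those roles on Core is command-line only. As far as I can tell it works like this (a sketch; the ocsetup component names are case-sensitive, so run oclist to get the exact strings for your build, and the unattend path is just a placeholder):

```shell
:: List available roles and their exact (case-sensitive) component names
oclist

:: Install the DNS and DHCP server roles
start /w ocsetup DNS-Server-Core-Role
start /w ocsetup DHCPServerCore

:: AD DS on Core is promoted with dcpromo and an unattend file
dcpromo /unattend:C:\unattend.txt
```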

Thursday, September 20, 2007

NTP Time synchronization for Windows domains and ESX Server


We ran into a problem last week: our phone system was out of sync with the time on our computers, and I was asked to fix it. Unfortunately I don't have access to the inner workings of our phone system, but here's how to do it on VMware ESX Server and a Windows 2003 domain (probably Windows 2000 too). Our clients are all Windows XP.

I chose to use the NTP pool from pool.ntp.org. It does a DNS round-robin to a list of donated servers. Most of them are web or DNS servers that also act as time servers. We use 3 different pool hostnames (0, 1, and 2) in case we happen to be given a bad server, and we add "us" to the FQDN so we only get US servers (visit pool.ntp.org to look up other countries):

0.us.pool.ntp.org
1.us.pool.ntp.org
2.us.pool.ntp.org
For your Windows domain, you need to do the following...

On your Windows domain controllers:
net time /setsntp:"0.us.pool.ntp.org 1.us.pool.ntp.org 2.us.pool.ntp.org"
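
As an aside, the same thing can be done with w32tm on Windows 2003. This is a sketch of what I believe the equivalent commands are (flags are from the built-in w32tm help, so verify on your own box):

```shell
:: Point the time service at the pool and mark this DC as a reliable source
w32tm /config /manualpeerlist:"0.us.pool.ntp.org 1.us.pool.ntp.org 2.us.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

:: Restart the time service and force a resync
net stop w32time && net start w32time
w32tm /resync
```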
Windows clients will typically get their time from the domain hierarchy (ultimately the PDC emulator). But just to be sure, create a GPO, or edit an existing one, that is applied to all of your workstations and servers. You can use the "Default Domain Policy" if you like:

Open "Computer Configuration > Administrative Templates > System > Windows Time Service > Time Providers".
Set "Enable Windows NTP Client" to Enabled.
Open the properties for "Configure Windows NTP Client". I set the following:
NtpServer = (Set to your domain name, which will direct it to one of your domain controllers)
Type = NT5DS
CrossSiteSyncFlags = 2
ResolvePeerBackoffMinutes = 15
ResolvePeerBackoffMaxTimes = 7
SpecialPollInterval = 900 (I set this to 15 minutes, but the default might be better for larger environments)
EventLogFlags = 0
After making the GPO changes, you can apply it to a computer by issuing "gpupdate /force", or just give it a few hours or so.

On the ESX Server, in the service console, I used root privileges (su -). You can use this handy script by VMColonel, or do the following manually...

Open /etc/ntp.conf with your favorite text editor, and make it look like this:
restrict 127.0.0.1
restrict default kod nomodify notrap
server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
driftfile /var/lib/ntp/drift
And then open /etc/ntp/step-tickers and do the same:
0.us.pool.ntp.org
1.us.pool.ntp.org
2.us.pool.ntp.org
Then run these commands:
esxcfg-firewall --enableService ntpClient
service ntpd restart
chkconfig --level 345 ntpd on
hwclock --systohc
And that's pretty much it. To see the offset between your computer and the timeservers, you can issue these commands...

ESX Server (and most Linux distros):
watch "ntpq -p"
On any Windows 2003/XP machine:
w32tm /stripchart /computer:pool.ntp.org
You might need to set your Command Prompt window width to 100 for proper display.

All that's left is to get our phone system synced up to the same servers...

Tuesday, September 18, 2007

EqualLogic performance confusion

I spoke with our EqualLogic sales rep yesterday, and they are suggesting we go with a PS300E. It has 7,200 rpm drives. Not 10k. We always avoided 7,200 drives like the plague. But here is EqualLogic, one of the biggest players in the iSCSI industry, telling us we should put their 7,200 rpm drives in. Ok...

We had originally planned on buying a PS3900XV, which has 15k drives. But now it's between the PS3600X (10k drives) and one of the 7,200 rpm models. They are all the same box, but with different drive sizes. Right now it looks like the 7TB raw model (PS300E) is the one we should buy, but we might splurge and get something larger.

But back to the performance...7,200 rpm! I had to ask around, so I threw a post up on the VMTN forums. And here's what I have so far:

  • joergriether tested a PS300E and a PS3900XV, and says they were about the same...
  • Yps has 90 VMs running on one PS400E, and has no trouble with it...
  • christianZ pointed me over to the thread he had started for SAN comparison, and also suggested the PS3600X over the PS3900XV. Not sure if he meant over the PS300E also...
So right now it looks like the PS300E is going to be what we go with. Sigh...

Monday, September 17, 2007

Configuring LUNs for virtual machines

Two nice folks responded to my post on the VMTN forums regarding how to set up VMFS partitions and LUNs for virtual machines on our SAN.

Before I talk about all of this, Stephan Stelter from LeftHand Networks also included a link about "How to Align Exchange I/O with Storage Track Boundaries". Good to know...

But anyway, this is apparently what most people do:

  • Create VMFS partitions for system drives, and place virtual machines that you would back up together on the same LUN.
  • For data drives, map to the LUN directly with the OS iSCSI initiator.
I haven't decided if we should place all our system drives on a single LUN or break them up into groups. I think I want to place domain controllers on their own LUN so I can restore our entire AD without the possibility of a USN rollback. And all our other servers don't really fall into groups... if I start to think about splitting them up, it gets to the point where I might as well make a LUN for each of them. And from what I've read so far, performance isn't really a problem if you have fewer than 16 VMs on a LUN.

So with all the system drives (except AD) on a single LUN, restores become a little more complicated. Not too hard... just harder than using individual LUNs. We would need to bring up a snapshot on a new LUN, mount that LUN on an ESX server (or maybe somewhere else), and copy the VMDK of the machine we're restoring over to the production LUN.

If there is a LUN for each system drive, restoring from a snapshot would just require a click on the EqualLogic array's web interface, and everything would be back.

So what are we going to do? I don't know yet. There's plenty of time to decide, and neither way is the "wrong" way (nor the "right" way). So we'll see...

Friday, September 14, 2007

EqualLogic wants to sell us stuff

EqualLogic invited themselves into our conference room yesterday to show us one of their iSCSI SAN arrays. All I can say is...wow. If you're looking at a SAN solution, definitely consider EqualLogic. And they will jump at the chance to give you a 2-hour in-house demo.

But regarding my post the other day...boy, was I way off. Apparently I didn't understand storage virtualization. I was debating how to split the drives up: half RAID 5 and half RAID 10, or all of it RAID 10...

EqualLogic introduced me to RAID 50. It requires a minimum of 6 drives, not including spares, so I'm not surprised I never came across it before. It basically takes 2 RAID 5 sets and stripes across them (RAID 0).

The EqualLogic PS3900XV has sixteen 300 GB drives. Two RAID 5 sets means 8+8. A spare in each set leaves us with 7+7. And parity (one drive's worth of capacity per RAID 5 set) brings us down to 6+6. So out of the 16 drives, we get to use about 12 of them space-wise, at least 3.4 TB. EqualLogic also reserves some space on each drive for housekeeping, but this shouldn't end up being more than 200 gig or so (the tech couldn't remember if it was 20 megs or 20 gigs reserved per drive).

RAID 5 would have given us 4.5 TB but would have been pretty slow on write speed. And RAID 10 would have given us 2.4 TB. So 3.4 is pretty good...
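
The capacity math above can be sketched out as a back-of-the-envelope calculation (using the post's numbers, raw decimal GB, and ignoring the housekeeping reserve):

```shell
#!/bin/sh
# RAID 50 on a PS3900XV: 16 x 300 GB drives split into 2 RAID 5 sets
drives=16
drive_gb=300
sets=2
spares_per_set=1   # one hot spare per set
parity_per_set=1   # one drive's worth of parity per RAID 5 set

per_set=$(( drives / sets ))                                   # 8 drives per set
data_per_set=$(( per_set - spares_per_set - parity_per_set ))  # 6 data drives per set
data_drives=$(( data_per_set * sets ))                         # 12 data drives total
usable_gb=$(( data_drives * drive_gb ))                        # 3600 GB raw

echo "${data_drives} data drives, ~${usable_gb} GB usable before overhead"
```

3,600 decimal GB works out to roughly 3.3 TB in binary terms, which lines up with the "at least 3.4 TB" figure once the housekeeping reserve is taken off the top.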

And even better is the fact that it's all virtual. We make a LUN for our file server, one for Exchange, one for SQL, etc. But we can grow these LUNs at any time, so instead of saying the file server gets 2 TB and Exchange gets 600 GB, we can just set them to about 150% of what they use right now and then expand them as needed.

My only question now, and one that EqualLogic couldn't answer for me, is whether we put all of our system drive VMDKs on a single LUN or make a separate LUN for each server. I've posted a question on the VMware forums, and I'm sure a lot of people have thought about the same thing while setting up VI3 and a SAN.

Thursday, September 13, 2007

VMware demos Continuous Availability @ VMworld

VMware HA (High Availability) is one of the main benefits of moving to a virtualized environment. In a typical setup, the VMs' disks reside on shared storage. Two or more ESX servers monitor each other, and if there is a hardware failure on one ESX box, another can take over and start the machines back up. This is great, because it typically means that downtime for applications is reduced to the amount of time it takes the OS to boot.

I've been watching Scott Lowe's blog as he liveblogs today's VMworld keynote, and apparently they showed something called Continuous Availability.

Continuous Availability actually keeps a running copy of the same VM on a second ESX server. The secondary VM is in a standby state, and the execution stream from the active VM is constantly replicated over to the secondary. If a failure is detected on the primary VM, the secondary takes over with almost no downtime at all.

I can't wait for this to be implemented as a feature in ESX (maybe 3.5?). It takes availability and disaster recovery to a new level, and further justifies our move to a virtual infrastructure.

Wednesday, September 12, 2007

HP 4x4 virtualization beast

HP announced their DL580 G5, a quad-socket, quad-core server with 16 drives and 128 GB of memory.

Could be in our future...

Tuesday, September 11, 2007

VMware announces ESX Server 3i

VMware announced ESX Server 3i today, and it's basically an embedded version of ESX server. We just purchased an 8-core HP server with 16 gigs of memory for our first ESX box. We were planning on boosting it to 32 gigs and purchasing a 2nd ESX server along with an EqualLogic iSCSI SAN. After today's announcement, I think we will wait to see what HP's ESX 3i offering looks like. I'm hoping that it will lower our acquisition cost when we do make our 2nd ESX purchase.

And I'm grateful I had a chance to use ESX 3.0.2 in the meantime. I've learned a lot from setting it up, and I can now say that I am comfortable in a Linux environment. I can't imagine learning as much about VMware from an appliance as from actually building the box from scratch. But the simplified installation and upgrades should be nice. Plus there's less shit to break.

LeftHand introduces iSCSI target appliance for ESX server

LeftHand Networks recently announced a software iSCSI target for ESX Server. It basically lets you reclaim all the unused disk space in your ESX box and stick it on your SAN. I'm excited about this; we maxed out our ESX box to hold us over until we purchase our SAN, and this is a perfect way to use all that space. I'm thinking VM backups in case we lose the SAN for some reason. Or maybe IT storage (ISOs, Ghost images). We'll evaluate our needs after our EqualLogic box gets here and the local drives on the ESX server are emptied.