
Triton Notes

Created by hww3. Last updated by hww3, 11 days ago. Version #1.

About a month ago I set up a mini Triton DataCenter on a Dell R710. Right now I don't have any separate compute nodes, and I'm provisioning VMs on the head node. Since this is a simple setup in my basement, I don't see the need (at this point) for a dedicated head node for 5 VMs. Overall, I've been really pleased with my Triton experience. A few notes on problems I've run into:

ZFS Disk layout

You don't get any real control over how the zpool is laid out when setup does its thing, and ZFS doesn't really allow you to change the pool layout once it's created. I really wanted a 4-disk raidz pool, and the only way I was able to get there was to start out with a dummy disk, migrate the data to a new zpool set up the way I wanted, and then rename it. I ended up wasting a lot of time getting everything set up the way I wanted due to all of the zfs filesystem rebuilding.
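For reference, the rough shape of that migration dance is below. This is a sketch, not a recipe: the pool name newzones and the disk names c0t2d0 through c0t5d0 are placeholders, and you should not run anything like this against a pool you care about without backups.

```shell
# Build the 4-disk raidz pool under a temporary name
# (c0t2d0 ... c0t5d0 are placeholder disk names)
zpool create newzones raidz c0t2d0 c0t3d0 c0t4d0 c0t5d0

# Snapshot everything on the setup-created pool and replicate it over
zfs snapshot -r zones@migrate
zfs send -R zones@migrate | zfs recv -F -d newzones

# Swap the pools: export both, then import the new one under the old name
zpool export zones
zpool export newzones
zpool import newzones zones
```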

What I have discovered is this:

1 disk -> concat

2 disks -> mirror

3 disks -> mirror + log

4 disks -> mirror + log + cache

etc

I'm completely fine with adding the log and cache disks later, but it would be nice to have the ability to specify how the core disks get used (perhaps an option in the JSON input file?), even if it's just a mirror/raidz1/raidz2/etc designation.
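If such an option existed, I'd imagine something like this in the setup answers file. To be clear, this key is entirely made up; setup doesn't support anything like it today:

```json
{
  "zpool_layout": "raidz"
}
```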

Network behavior and packet loss

I've had some truly weird network behavior on VNICs (i.e. external0 vs. bnx0): high rates of packet loss (upwards of 25% over time) on a basically quiet network. I traced it down to one of two possibilities: having a Netgear R6250 anywhere on that network range regardless of where it was physically connected, or possibly SoftEther VPN. I got rid of the Netgear device and replaced it with a pfSense appliance and things were back to good, but I had some periodic problems when I was testing SoftEther for remote access. I haven't investigated further, but I've had identical problems with both the onboard NIC (Broadcom NetXtreme II) and a pci-e adapter (an Intel T310).

DHCP Client woes

Any container or KVM instance that needs to perform DHCP requests is probably not going to work; the requests seem to be filtered. So, for example, using pfSense in a KVM to connect to upstream internet that assigns addresses using DHCP doesn't work. The same goes for running a remote access VPN server such as SoftEther that wants to get addresses for connecting clients.

Running a DHCP server is ok (and well documented), just turn off DHCP spoofing protection.
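The relevant knob is the per-NIC allow_dhcp_spoofing property on the vm record, set via vmadm. A sketch, with the MAC address as a placeholder for whatever "vmadm get UUID" shows for the NIC in question:

```shell
# Let the zone's NIC answer DHCP requests
# (the mac value below is a placeholder; use your NIC's actual MAC)
echo '{
  "update_nics": [
    { "mac": "72:9c:00:00:00:00", "allow_dhcp_spoofing": true }
  ]
}' | vmadm update $UUID
```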

Zone safeguards

It would be really great to be able to lock a vm so that it can't be deleted. I have accidentally deleted a vm because its UUID had a similar prefix to the one I wanted to delete. Something like "vmadm destroy -i UUID" would be great, especially if it printed the respective line from "vmadm list" before prompting.

Update: it turns out there is a way to effectively lock a vm: the indestructible_delegated and indestructible_zoneroot options on the vm record. Setting these to true enforces a 2-step delete.
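In practice the 2-step delete looks something like this (sketch; $UUID stands in for the vm's UUID):

```shell
# Protect the zone root; a plain "vmadm destroy" will now refuse to run
echo '{"indestructible_zoneroot": true}' | vmadm update $UUID

# To actually delete the vm, you first have to remove the flag:
echo '{"indestructible_zoneroot": false}' | vmadm update $UUID
vmadm destroy $UUID
```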

Data storage

Finally, I know that this goes against the Triton mantra, but unfortunately there are certain situations where data needs to be preserved as vms that use it are created and destroyed... think very large traditional databases, or filesystems with data for which zfs send and recv would take too much time.

Recognizing that such things tie instances to specific hardware, it would be really great that if I had a disk shelf containing some zpool other than zones on a machine, there were some method to lofi or delegate a dataset.

Update: there is a filesystems key that can be used to mount filesystems, but it doesn't appear to be updatable, so the zone would have to be created with the data in place. Alternately, zonecfg could be used to add the filesystem, but its presence would not be reflected in vmadm.
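The zonecfg route looks roughly like this. The pool/dataset name tank/bigdb and the mount point /data are made-up examples, and the change only takes effect on the next zone boot:

```shell
# Add a lofs mount of a path from another pool into an existing zone
# (note: vmadm will not know about this filesystem)
zonecfg -z $ZONE_UUID <<'EOF'
add fs
set dir=/data
set special=/tank/bigdb
set type=lofs
end
EOF
```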

KVM fanciness

I was able to crib some work done for FreeBSD and created integrated KVM images for NetBSD 6.5.1 and 7.1 that seem to work just fine with all of the SDC bells and whistles. The guest integration is simple and elegant.

The KVM drivers don't seem to work with non-server Windows OSes. Red Hat's virtio driver disk includes drivers that do work (for virtio and NetKVM).

On "private-cloud" Triton, the Windows administrator password doesn't seem to be populated automatically, so you end up with a vm that can't be logged into. One solution: change the setup-complete script on the guest tools image to look at root_pw instead of administrator_pw. This seems to work, and has the benefit of generating sufficiently complex passwords, so the one-shot password change script doesn't fail to set a password because a manually provided administrator_pw value is too simple.

The qemu_extra_opts key can be used to pass flags to the qemu server process. This is useful for specifying a floppy drive image (required for the virtio drivers under XP) and so forth. Just remember to unset it after use so that you don't have a floppy drive perpetually mounted.
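For the floppy case, that works out to something like the following (the image path is a placeholder; -fda is qemu's standard flag for the first floppy drive):

```shell
# Attach a floppy image for the XP virtio driver install, then reboot the vm
echo '{"qemu_extra_opts": "-fda /zones/floppy/virtio.img"}' | vmadm update $UUID

# ...after the drivers are installed, clear it so the floppy doesn't persist:
echo '{"qemu_extra_opts": ""}' | vmadm update $UUID
```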

