Linux


Western Digital 1.5TB Green Drives - Not for your Linux Software RAID

I recently bought a couple of Western Digital 1.5TB "Green" drives (WD15EADS) to rebuild my home media storage array with higher-density disks.  I already had one Seagate 1.5TB drive (ST31500341AS) that I was using as a "scratch" drive (rpm mock builds, storing MythTV recordings, secondary backup), which I couldn't use at all until Seagate released a firmware update, so I was skeptical about buying more of those.  I figured the Green drives would be fine: despite their variable rotational speed, the power savings and the ability to spin up to higher speeds when needed seemed like a reasonable trade-off.

Unfortunately, once I had a couple of these drives in a software RAID 5 array I began to notice problems.  When copying data to the array I would see decent transfer rates of ~33MB/s for 30-40 seconds, but then the transfer rate would drop to 200-500KB/s for 2-3 minutes.  Since the array was initially built with only the WD drives, I was pretty sure the problem was isolated to those drives.  Even after adding the Seagate drive the problems remained.

To test my assumption that the WD drives were somehow causing problems, I began an rsync and waited until the transfer "paused".  I then ran "dd" against each of the drives in the array, seeking to a new position on the drive between tries.  It consistently showed the WD drives only getting 100-250KB/s while the Seagate drive would get ~90MB/s.
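
A quick way to reproduce that per-drive test is a direct read from each device, seeking to a fresh spot each time so caching doesn't mask the real rate; the device names and offsets below are just examples:

# read 64MB straight off each disk, skipping to a different offset each run
dd if=/dev/sdb of=/dev/null bs=1M count=64 skip=20000
dd if=/dev/sdc of=/dev/null bs=1M count=64 skip=40000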

After replacing all the WD drives with equivalent Seagate drives, the array is resyncing at ~90MB/s; almost three times what I was getting with the WD drives alone.

When I first started seeing these problems with the WD drives I thought it might be related to the drives' idle timer, so I downloaded the WDIDLE3.exe program to increase the idle timeout from the default 8 seconds to 25 seconds, and later disabled the idle timeout entirely.  When this didn't fix anything I downloaded the WDTLER.exe program to enable TLER on the drives, which also had no effect (I didn't expect it to, since I wasn't seeing transfers completely stop or the drives drop out of the array).

I'm not sure what to do with these WD drives; while they seem to work fine independently, they don't perform correctly at all when put into a RAID array.  I'm beginning to fear that as hard drives get larger and larger, the complexity of the firmware is growing too quickly for drive manufacturers to keep them performing reliably.

Default gnome-terminal size

Here's the command I found to set the default gnome-terminal height:

gconftool-2 /desktop/gnome/applications/terminal/exec --type string -s 'gnome-terminal --geometry=85x40'

Now when I hit "meta-t" I get a gnome-terminal at the size I like.

Oh, to get "meta-t" to work, you might want to set this as well:

gconftool-2 /apps/metacity/global_keybindings/run_command_terminal --type string -s '<Mod4>t'
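
To double-check what got set, you can read the keys back with gconftool-2's --get mode (same key paths as above):

gconftool-2 --get /desktop/gnome/applications/terminal/exec
gconftool-2 --get /apps/metacity/global_keybindings/run_command_terminal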

MediaIndexer project setup

I was able to set up the MediaIndexer (working project name) website and repository tonight so Brady and I have a place to coordinate. Trac + Mercurial was really pretty easy to set up, and I've had the sources in Mercurial for a week or so already (which made it nice for copying code around between my computers and tracking changes).

Our goals are pretty ambitious: develop a media indexer that can detect when files go bad and synchronize/back up those files between friends.

Brady already has some scripts thrown together that handle a lot of the file identification, which we can then use to create rsync commands to try to pull his damaged files back from my server.
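
The rsync side of that is simple enough; something along these lines, where the file list, paths, and hostname are all hypothetical:

# pull known-good copies of the damaged files back from my server,
# preserving timestamps and permissions
rsync -av --files-from=damaged-files.txt silfreed.net:/srv/media/ /srv/media/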

Longer term we'd like to create an architecture where friends can have their files indexed locally but search other friends' files for better versions, versions with fewer errors, or recover their own damaged files if they were synced from the same source.

sshing to multiple ports at the same host without warnings

I ran into an article that addresses a long-standing problem I've had with OpenSSH's known_hosts file: it doesn't store the port for a host, so you can't ssh to different ports behind a router without getting warnings about fingerprint mismatches. The article describes how to access multiple machines behind the same hostname. Below is my workflow.

$ ssh host.example.com

Here I accept the fingerprint, which gets saved into my standard .ssh/known_hosts file. I log out of the server and ssh back in on a different port with a temporary known_hosts file.

$ ssh -o "UserKnownHostsFile kh2" host.example.com -p 2222

I get prompted for the new fingerprint and accept it as well. I log out of the server, then append this fingerprint to my existing known_hosts file:

$ cat kh2 >> .ssh/known_hosts && rm kh2

I can now ssh to the same host on different ports without warnings about man-in-the-middle attacks.

$ ssh host.example.com
$ ssh host.example.com -p 2222

Version controlling my home dir

For a while now I've noticed that things in my home dir aren't set up optimally for my workflow. I've been trying to run with SELinux enabled, and I run into development problems when I try to run web applications out of my home directory. Various parts of my home directory are version controlled separately as part of the software projects they belong to, but not as a whole.

What I'd like is to set up some other place for "projects" (Mozdev code and docs, other software projects, RPM building, etc.) and then version control my home dir.

The problem is I'm not sure how much this helps me. Things like IM clients are still going to want to log to ~/.somedir/log, which is evil. SELinux contexts for files in /home/myprojects are still going to be wrong, so I'm not going to be able to run webapps out of there either. Moving my docs out of my home dir might be a pain due to xdg-user-dirs needing to be set up correctly to point at the new document dir.
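
For what it's worth, pointing xdg-user-dirs somewhere else is only a line or two in ~/.config/user-dirs.dirs; the target path here is just an example:

# ~/.config/user-dirs.dirs - paths must be absolute or relative to $HOME
XDG_DOCUMENTS_DIR="$HOME/projects/docs"

Afterwards "xdg-user-dir DOCUMENTS" should report the new location.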

I'd really love to have a lean, mean, version-controlled home dir that I can port around between boxes. Have people attempted this before? What about the above problems?

Program your Logitech Harmony remote in linux

Concordance has finally made its way into Fedora!

I've been maintaining packages separately from Fedora for a while now; the review request has been open since October 2007! Unfortunately there were some legal concerns about the trademark on the project's original name ("harmony"), and then when 0.20 was finally released with a new name, the package required some work since the library was split off from the CLI.

At any rate, if you need to program your remote from Fedora, all you need to do is:
yum install concordance

You can then login to the Logitech Harmony website and use the program to update your remote (even firmware on some models)!
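
If I remember right, the command-line usage is just pointing concordance at whatever file the website hands you and letting it figure out what to do; the filename below is only an example:

# concordance detects the file type (connectivity test, config, firmware) and does the right thing
concordance Connectivity.EZHex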

Incremental compressed backups with perms and no root keys

I've looked before for an incremental backup tool that doesn't require root, and I've finally found one. Similar to rdiff-backup is a program called duplicity. It uses tar, gpg, and the rsync algorithm to store encrypted differential backups.

duplicity needs a little scripting to automate the backups. I came up with a script that backs up certain directories fully once a week and incrementally the rest of the week. I also disabled the encryption since the transfer method is secure (ssh in my case) and the destination server is trusted. If you were using Amazon's S3 service or a public FTP share you might want to tweak this a bit.

/etc/cron.d/backup

32 3 * * 1-6 root /usr/local/sbin/backup.sh
32 3 * * 0 root /usr/local/sbin/backup.sh --full

/usr/local/sbin/backup.sh

#!/bin/bash

v="-v0 --no-print-statistics"
cmd=""
duplicity="duplicity"
opt="--no-encryption --volsize 100"
dest="scp://taonas@thor.home.silfreed.net"
maxage="1M"
PASSPHRASE=
export PASSPHRASE

for arg in $@; do
    [[ $arg == "-v" ]] && v="-v4"
    [[ $arg == "--full" ]] && cmd="full"
done

dcmd_backup="$duplicity $cmd $opt $v"
dcmd_age="$duplicity remove-older-than $maxage $opt $v --force"

# backups
$dcmd_backup /etc $dest/etc
$dcmd_backup /var/log $dest/var-log
$dcmd_backup /home \
    --exclude /home/silfreed/tmp \
    --exclude /home/silfreed/src/silfreednet/tmp \
    --exclude /home/silfreed/src/workspace \
    --exclude /home/silfreed/src/mozdev/workspace \
    --exclude /home/silfreed/.thunderbird/7g3nnt02.default/ImapMail \
    $dest/home

mysqldump_dir=/tmp/mysqldump
mkdir $mysqldump_dir && \
    chmod 700 $mysqldump_dir && \
    mysqldump -u root -A > $mysqldump_dir/mysqldump.sql && \
    $dcmd_backup $mysqldump_dir $dest/mysql
rm -rf $mysqldump_dir

# age out paths
for path in /etc /var-log /home /mysql; do
    $dcmd_age $dest$path
done
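
Restores and sanity checks are just more duplicity invocations; roughly something like this (same destination URL as the script, local paths are examples):

# list what's in the latest backup of /etc
duplicity list-current-files --no-encryption scp://taonas@thor.home.silfreed.net/etc
# restore it to a scratch directory for inspection
duplicity restore --no-encryption scp://taonas@thor.home.silfreed.net/etc /tmp/etc-restore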

Dear Lazyweb: Garmin Forerunner 50 linux support

I'm thinking about getting a Garmin Forerunner 50 but am concerned that the ANT USB thing won't work with Linux. I've seen a thread about ANT development drivers going into the kernel, but I haven't found any firsthand experiences. Anyone with any information about it? If it helps, the Garmin Forerunner 405 supposedly uses the same ANT technology.

All your boxes are upgraded to F8

I finally have all my boxes upgraded to F8 (via yum; viva la Live Upgrade SIG!). It really wasn't as bad as I anticipated, and in fact probably went much better than my anaconda-based upgrade to F7, since I had all my third-party repos enabled for the upgrade and didn't have many stragglers after it was all done.

Two of my home servers were still running FC-6 (!!). These boxes presented some problems getting upgraded since they were running some more esoteric package sets. One server gave me fits with unmet dependencies for packages that clearly existed. It was an x86_64 box and was reporting a "missing dependency" for glibc.i686. /me shrugs. "yum upgrade" rather than "yum upgrade yum* rpm*" seemed to get it fixed. I did have to remove a number of packages beforehand, but that didn't bother me much.

Overall I am very impressed with the F8 release. My statements earlier about Fedora not having enough direction were apparently completely without merit, or were just the result of not having enough first-hand experience with what had been going on. Hopefully I'll be able to give a bigger hand to development for F9 and keep the great work going.

Differential incremental backup solutions that don't require "root"

I've been using rdiff-backup for several Windows computers recently and it has worked out well. I started looking into using it for Linux, but ran into a small problem.

Transferring backups between computers assumes root access on the destination.

This is a big problem for me; there's no reason to need root access on the destination server just to save a backup of one system onto another. The reason it's needed is to preserve UIDs and GIDs and set arbitrary attributes.

I don't have this problem with my tar-based backups.

Ideally, I'd like a solution that provides:

  1. Differential backups - only back up what has changed
  2. Incremental backups - repeated backups of the same directories only record what's changed between runs
  3. Preserves permissions in some kind of manifest - this way I don't need to set up root ssh keys between servers
  4. (optional) compresses the backup

So lazyweb, do you know of such a solution?
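
For comparison, a tar-based approach like the one I alluded to above covers most of that list without root on the far side. This is only a sketch; the snapshot file, hostnames, and paths are made up:

#!/bin/bash
# incremental (listed-incremental) backup of /home, compressed, pushed over ssh
# to an unprivileged account; ownership and permissions are stored inside the archive
snap=/var/lib/backup/home.snar
tar --listed-incremental="$snap" -czf - /home \
  | ssh backupuser@backuphost "cat > /srv/backups/home-$(date +%F).tar.gz"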

Disable console blanking

echo setterm -powersave off -powerdown 0 -blank 0 >> /etc/rc.local

Logitech Harmony software for Linux

I found a new project that looks very promising for loading configuration onto my Logitech Harmony 880 remote. Naturally, I built Fedora 7 i386 packages for it. I'd really like to get more codes programmed on the remote for controlling my MythTV box, and having software for Linux means I won't have to boot into Windows all the time, which will be much less of a pain.

What Fedora needs is some direction - and online upgrades

Some of the biggest problems I see in Fedora these days are the following:

  1. Lack of direction

    Since Fedora is a distribution for developers, by developers, in order to test out new technology, the main tree ends up being a hodge-podge of whatever each individual maintainer feels like working on. The exception is when an individual or small group of individuals wants things a certain way; then you have to abide by their rules (take, for example, the "no kmods in Fedora" policy).

    If Fedora had a better sense of what it should be doing, it would polarize developers along the same path rather than everyone working toward their own goals.

    For example, I haven't seen any real groundbreaking features for Fedora 7 or 8. Sure, Fedora 7 had "the merge", but that was mostly (arguably) for the maintainers' benefit, and again was driven by very few people. For Fedora 8 one of the "big features" was supposed to be a rework of the init system, but that's again been pushed off. Instead we get features like "online desktop" and "NetworkManager". While these do improve the user experience, they are very small niches that don't involve developers from all of Fedora - just tiny groups of people. Everyone else is left to do whatever they want, so long as they don't piss off one of the groups that are in control.

  2. Upgrades aren't supported - All hail Anaconda, the great upgrader/installer!

    Fedora (and RHEL/CentOS) needs to be able to upgrade online, using a tool like yum (or smart, or apt - I don't really care which tool it is - right now it seems smart could handle it better than yum). Most of the other distros can do this without problems (Gentoo, Ubuntu, Debian).

  3. No optional featuresets for packages - You have packages that either include the feature or do not.

I'm not sure how to fix the problems, but I do know that Fedora needs to make the following happen:

  1. Make upgrades work - Even with Anaconda it wasn't possible to upgrade from FC-6 to F-7 without broken packages; an online upgrade would arguably have gone better, but it "wasn't supported". "yum upgrade" should work like "apt-get dist-upgrade" between versions.

    With the rolling release like we've had the past couple of versions, it's very easy for someone with an up-to-date release to have newer packages than are available with the release of the next version.

    Case in point: FC-6 had KDE 3.5.7, but F-7 shipped with KDE 3.5.6. Immediately after release, 3.5.7 was available as an update for F-7, but an upgrade from FC-6 to F-7 wasn't supported due to libata changes (hda->sda). This stuff needs to be worked out so that you can upgrade from one version to the next easily.

  2. Optional package features - Offer packages that can somehow enable/disable features based on what other packages are installed. Be able to install package foo with either mysql or postgresql support, or both - even if they're compile flags. That is to say, if I have mysql currently installed and I install foo, I get foo w/ mysql support. And if I try to install foo without mysql or postgresql installed, provide some way of saying that those features can be supported if I install them.

Both of these problems are hard. They require coordination across all of the package maintainers. Perhaps they're features that could polarize the maintainers to a common goal and make Fedora better.

CentOS upgrade success

Since yum isn't capable of downgrading the correct package, I decided to try smart.

And after a couple hours (literally) of computing transactions, it worked.

Now, there's definitely some cleanup to do, but it's pretty good. Unfortunately smart isn't available for RHEL 5 yet (I've emailed rpmforge to request it), but things are looking pretty good.

The biggest problem is that yum doesn't work. After quickly fixing the python-elementtree package, yum now says:
# yum clean all
Loading "installonlyn" plugin

Could not find any working storages.

Fortunately, here's how to fix it.

For me it was:
# rpm -Uvh sqlite-3.3.6-2.i386.rpm python-sqlite-1.1.7-1.2.1.i386.rpm

And here's the config I used for smart:
smart channel --add "Silfreed.net" type=rpm-md priority=-3 baseurl=http://www.silfreed.net/download/repo/rhel/5/i386/silfreednet -y
smart channel --add "Dries" type=rpm-md priority=-5 baseurl=http://ftp.belnet.be/packages/dries.ulyssis.org/redhat/el5/en/i386/dries/RPMS -y
smart channel --add "Dag" type=rpm-md priority=-5 baseurl=http://apt.sw.be/redhat/el5/en/i386/dag/ -y
smart channel --add "EPEL" type=rpm-md priority=-2 baseurl=http://download.fedora.redhat.com/pub/epel/5/i386 -y
smart channel --add "CentOS Base" type=rpm-md baseurl=http://mirror.centos.org/centos/5/os/i386/ -y
smart channel --add "CentOS Updates" type=rpm-md baseurl=http://mirror.centos.org/centos/5/updates/i386/ -y
smart channel --add "CentOS Addons" type=rpm-md baseurl=http://mirror.centos.org/centos/5/addons/i386/ -y
smart channel --add "CentOS Extras" type=rpm-md baseurl=http://mirror.centos.org/centos/5/extras/i386/ -y

CentOS upgrade time take 2

Figured I'd try upgrading again, this time from CentOS 4.5 to 5. When 4.5 came out I quickly tried just pushing the new centos-release and centos-release-notes packages onto the server and doing a 'yum upgrade', but it failed trying to resolve dependencies (kernel and kudzu, I believe).
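
That attempt looked roughly like this; the package filenames are approximate since I didn't keep the exact versions:

# push the CentOS 5 release packages on top of 4.5, then try a full upgrade
rpm -Uvh centos-release-5*.rpm centos-release-notes-5*.rpm
yum clean all
yum upgrade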

So now it's coming time to do some maintenance on Argo and I'd like to finally upgrade to CentOS 5. I did some research, and it doesn't sound any easier than before.

First, some packages in CentOS 4.5 are newer than what's available in 5.0. This means even anaconda will have problems (I've experienced this before when upgrading from an updated FC6 to F7). That thread basically says it's probably easier to wait for 5.1 to come out, then upgrade before 4.6 comes out.

Then there's the problem of yum not working after an upgrade.

And of course there's the guy who had various breakage doing the update manually.

So I'll probably try to do the upgrade manually on my firewall first just to see how it goes. I guess the worst-case scenario is that I have to install CentOS 5 onto some new drives for Argo, then copy the data over from the existing drives. Not the most ideal upgrade path, but it's an upgrade path.