July 25, 2006
Old Hard Drive: R.I.P.
Well, it finally happened. Over this past weekend, while working to finish migrating data from a failing drive to a new one, I reached the end of my rope. I have been unable to recover enough data from the old drive to do anything useful with it, and since it was still part of the LVM Volume Group (or VG), the problems it was causing were just too much to deal with.
So, I wiped all of the hard drives for the data VG. I am rebuilding everything from my backups. Ha, I'm sure I got some of you. You thought that perhaps I was going through all those gyrations trying to save my data because I didn't have any backups. Well, I think I'm a little smarter than that :) .
Seriously, though, I had backed up all the stuff that would be difficult to replace/reconstruct as well as all the irreplaceable data. I had things like several Linux distributions' files and .iso images, to make it easy to burn discs and to do network installs (either whole installs or just to have the packages handy). There were mirrors of updates and other software. All of that is easy to replace, though, so I didn't bother backing it up.
The next thing I would like to do for my file server is to pick up another 320GB drive so that I can pair it with the one I have now and migrate everything into an LVM on RAID1 set. Later, I'll add more 320GB drives and convert the RAID1 to RAID5 or RAID6. In the meantime, the 45GB drive is now sitting in the target pile, waiting for the next target shooting outing and the 120GB drive is going to be an online backup store in the file server. That leaves me with only 299GB of storage on my file server.
How will I ever get by? :)
Posted by lamontp at 10:44 AM
July 17, 2006
The LVM Advantage: Migrating Data
The Linux Logical Volume Management system (LVM) is a wonderful piece of technology. In a nutshell, you take disparate storage devices and place them under the control of LVM in a container called a Volume Group (or VG for short). You can then create Logical Volumes (or LV for short) in the VG, which can be formatted, mounted and used, just like any block device. The LVM code will manage everything.
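To make that concrete, here is a minimal sketch of the lifecycle (all of the device names, sizes and mount points here are hypothetical, and every command must be run as root):

```shell
pvcreate /dev/sdb1 /dev/sdc1      # put two disparate devices under LVM control
vgcreate vg0 /dev/sdb1 /dev/sdc1  # group them into a Volume Group named vg0
lvcreate -L 100G -n data vg0      # carve out a 100GB Logical Volume
mkfs.ext3 /dev/vg0/data           # format it like any other block device
mount /dev/vg0/data /srv/data     # ...and mount it
```

From that point on, programs just see an ordinary filesystem; LVM manages the mapping down to the underlying drives.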
Sometimes, when I'm first introducing students to LVM, they ask me something like, "Why would I ever want to do this? Add an extra layer of complexity and overhead, just to get some fancy looking partitions? No thanks; I'll stick to plain partitions."
First of all, LVM is very efficient, introducing almost zero overhead for most normal operations. Yes, there is a little, but it can be difficult to measure, let alone notice. Second, there are many irritating storage management tasks that are downright child's play to accomplish with LVM in place but quite difficult with just plain partitions.
For example, you can add additional storage and move existing data onto it, clearing out old devices without having to shut anything down. Let your programs and users continue to access and manipulate the data. In most cases, they won't even be able to tell you're moving stuff to a new hard drive. Of course, if your system doesn't support hot-plugging of new hard drives, you will have to power down to add the drive physically to the system and again to remove (an) old one(s).
This is the case with my home file server, so I shut it down to add the new drive (read my recent posts to learn about that adventure). But while I'm moving the data, the entire system is running and available as normal. I can read and write data all over my LVM LVs without worry.
Here's the sequence of events:
- Insert the new drive into the system; if you couldn't hot-swap it, boot the system back up
- Get to a root shell and create an LVM partition on the new drive (that's partition type 0x8E)
- Initialize the new partition as a Physical Volume (PV) by running pvcreate /dev/sdb1 (change /dev/sdb1 to the correct device, of course)
- Add the new storage to the VG of your choice (a PV can only be in one VG) by running vgextend vg0 /dev/sdb1 (change vg0 to an appropriate value)
- Enjoy your expanded storage!
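Put together, the whole expansion looks something like this (the device and VG names are examples; run as root):

```shell
fdisk /dev/sdb          # create a partition and set its type to 0x8E (Linux LVM)
pvcreate /dev/sdb1      # initialize the partition as a Physical Volume
vgextend vg0 /dev/sdb1  # add the new PV to the Volume Group
vgdisplay vg0           # confirm the new free extents showed up
```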
OK. In my case, I want to move everything from all the old drives to the one new one, so I'll use a special form of the pvmove command. If I weren't removing all of the existing devices and were keeping more than just the one drive, I would not specify the second parameter to pvmove. Here is the output from my runs:
# time pvmove /dev/hde2 /dev/sda1
  /dev/hde2: Moved: 0.3%
  /dev/hde2: Moved: 0.7%
  /dev/hde2: Moved: 1.0%
  /dev/hde2: Moved: 1.4%
  /dev/hde2: Moved: 1.7%
  /dev/hde2: Moved: 2.1%
  . . . snip . . .

real 72m23.484s
user 0m4.812s
sys 0m4.116s

# time pvmove /dev/hdg1 /dev/sda1
  . . . snip . . .
  /dev/hdg1: Moved: 44.2%
  /dev/hdg1: Moved: 44.6%
  /dev/hdg1: Moved: 45.0%
  /dev/hdg1: read failed after 0 of 1024 at 4096: Input/output error
  /dev/hdg1: read failed after 0 of 2048 at 0: Input/output error
  Failed to read existing physical volume '/dev/hdg1'
  Physical volume /dev/hdg1 not found
  ABORTING: Can't reread PV /dev/hdg1
  ABORTING: Can't reread VG for /dev/hdg1
I'll talk about the errors in a minute.
As you can see, it can take a while to move that much data, so running these commands in a screen session is recommended.
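If you haven't used screen this way before, the pattern is simple (the session name is arbitrary):

```shell
screen -S pvmove    # start a named session, then run pvmove inside it
# press Ctrl-a, then d, to detach while pvmove keeps running
screen -r pvmove    # reattach later to check on the progress
```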
With the two-parameter form of pvmove, I'm telling LVM to migrate data off of the old drive and onto the new one only. Again, except in these kinds of very narrow circumstances, it's better to tell it only the device to move off of and let LVM decide where to move the data (it will lay things out much more efficiently that way).
OK, so there were some errors running pvmove and it bailed out. Let's see what we're dealing with:
# pvdisplay /dev/hdc1
  /dev/hdc1: read failed after 0 of 2048 at 0: Input/output error
  No physical volume label read from /dev/hdc1
  Failed to read physical volume "/dev/hdc1"
Uh, oh. Looks like I'm going to have a little difficulty getting this one fixed. I ran a few other commands, looking at data LVs that are already mounted, and it seems that the corrupted portion of the disk is not affecting any existing data. I did have a /dev/data/up2date LV, which I deleted during the installation of SUSE Linux 10.1 (I won't need that Red Hat-specific item). Prior to the installation, it was not mountable and fsck couldn't recover it because of drive problems. This was one of the reasons that I decided to get the new drive in the first place.
Hopefully, I'll find a way to verify whether or not there is any other data on that drive that I need to worry about. But as it's getting late, I think I'm about to head off to bed.
Just to finish things off, run vgreduce data /dev/hde2 (the VG and device names from my case). At this point, I can remove the 120GB drive, and once I fix up and finish migrating from the 45GB drive, I'll be done. In my case, that means another powering down (so I'll wait till I can pull them both), but it is a home file server, so I don't mind.
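For completeness, the cleanup on my data VG would look roughly like this (pvremove is optional, but it wipes the LVM label so nothing re-detects the old disk later):

```shell
vgreduce data /dev/hde2   # drop the now-empty PV from the Volume Group
pvremove /dev/hde2        # erase the LVM label from the old drive
# the drive can now be physically removed at the next power-down
```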
It's very nice to have access to all of my data while I'm upgrading my storage system. There isn't even a performance penalty for other I/O, as the LVM code will submit the pvmove I/O operations as low priority to the kernel's I/O scheduler.
Posted by lamontp at 11:15 PM
New Hard Drive? Test It First
As you may have read in my most recent entries, I am updating my home file server, replacing the two existing ATA data drives with a new, larger, faster SATA drive.
The old drives are 45GB & 120GB in size, which works out to approximately 41GB and 111GB formatted capacity, for just over 152GB of storage. The new drive is a 320GB model, which works out to just over 299GB formatted capacity.
Since the old drives are nearly full, I'm going to be moving a lot of data. As I'm using LVM, I can migrate the data to the new drive with ease, while I have everything online and in use, but I'll cover that later. Before I move the data over, though, I would like to know if the new drive is good. This technique won't tell you for sure (as in life, sometimes there is no certainty), but it's a good indication that things are OK, and it gives the new drive a little bit of a workout.
I simply used the badblocks command to test the entire drive, like so:

# time badblocks -b 4096 /dev/sda > sda.badblocks

real 95m41.158s
user 0m1.764s
sys 3m0.631s
I ran that after connecting remotely to the server from work via SSH. I used screen so that I could disconnect from the session, come home and reconnect later. When I did, I saw that the entire drive was checked in under 96 minutes. Not bad.
Of course, it's always a good sign when there are no error messages on STDERR, but I'll still have to check the sda.badblocks file created by my command:
# ls -l sda.badblocks
-rw-r--r-- 1 root root 0 Jul 17 16:38 sda.badblocks
As you can see, the file is empty. So, it looks like my new drive is just fine. That means I'm ready to copy all of my data over to it, but you'll have to read my next post to learn how I'm doing that without interfering with normal operations.
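Since badblocks writes only the numbers of any bad blocks it finds to that file, the check is also scriptable; "is the file non-empty?" is the whole test (assuming the sda.badblocks filename from above):

```shell
if [ -s sda.badblocks ]; then
    echo "bad blocks found:"
    cat sda.badblocks
else
    echo "drive clean"
fi
```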
Posted by lamontp at 8:08 PM
July 16, 2006
PCI SATA Controller is Working
In my recent post about creating bootable CD images, I talked about the reasons why I was trying to build a bootable CD: I needed to install an updated BIOS to try and get my new ($19, BTW) PCI SATA controller to show up.
When I tested the card in my dual Opteron box, lspci did show and identify the PCI SATA controller.
I decided to try it again in the file server box, so I put the whole thing back together and fired it up, entering a rescue mode using a RHEL 4 ES DVD that was lying close by. Nothing from lspci, so I decided to button up the box (I had it sitting on the back of my desk while working on it) and replace it in the server stack so that I could at least use the storage I had.
Boy, was I in for a pleasant surprise.
"Oh well," I was thinking. "I tried. It just doesn't look like it's going to work on that motherboard. I guess I'll have to start planning on building the next file server. Let's see, I want to have PCIe in it and I want it to be an AMD Opteron, probably supporting multi-core processors ... 2 or 4 ... or the new 8-way from Tyan? Yeah, that would be cool! Of course, RAM isn't an issue, as any of the boards that I could use to build that system will support more than I need for a home file server." Wow, all that in the time it took me to get the panels back on and put in most of the screws. Then it hit me: "I forgot to put that Adaptec AHA-3940UW card back in." So I pulled off one panel, remounted the "omitted" card, replaced the panel and finished putting in the screws.
A few moments later, I had the server snugly back on its shelf (I'm not rack mounting, yet :( ) with the KVM cables plugged back in. I powered on the monitor and switched the KVM to the file server while pressing its power button. I needed to select the second kernel to boot (I still haven't fixed the "newest" one), so I sat and watched. That's when it happened.
All of a sudden, there it was, on the screen ... the PCI SATA controller card's BIOS, showing it had found the one drive plugged in and that there were no RAID arrays configured (it has some kind of "on-board" RAID0/RAID1 capability, which I'm sure is actually FRAID). "It's ... working!?" Wow, cool. I hit <CTRL>+<ALT>+<DELETE> on the keyboard. "Let's see if it does that again." It did.
It's always nice when hardware starts working, but sometimes, it can be a little frustrating not knowing for certain why it suddenly started working. Oh, well, it's running! I'm not complaining one little bit.
The only thing I could think of was that after the card had been initialized for its first time in the dual Opteron box and then put back into the file server system, it was still in the same slot, so the BIOS saw the same device list and didn't fully rescan the PCI bus; it therefore skipped the card, which is why the rescue environment's lspci command didn't show it listed. But when I added the SCSI controller back in (there are no drives on it, but I sometimes hook up external devices), the BIOS did rescan everything and decided it knew what to do with the SATA controller. Well, that's the only theory I have at this point.
So, I rebooted with a SUSE Linux 10.1 CD. When I reached the partitioning, it saw all 4 drives. Yippee! So, I configured the partitioning, reformatting all the partitions that had RHEL 3 on the system LVM VG and creating the mount points for all the data VG's LVs. In fact, I started writing this post before it started installing files and it's already finished, rebooted and been sitting there for at least 5 or 10 minutes until now, when I checked on it. I finished up the installation (there's going to be some more housecleaning and services configuration to do, of course) and then returned to finishing this post.
Posted by lamontp at 11:23 PM
Creating a Bootable CD Image with mkisofs
I've been working on my home file server this weekend. It has had a little over 150GB of storage for about 2.5 years, running RHEL3. About 2 months ago, I picked up a 320GB SATA drive and a PCI SATA controller to upgrade the system.
Unfortunately, there have been a few bumps and bruises on the way to getting the new drive working in my server. To help you understand some of the issues, here are the specs on that box:
- 700MHz AMD Duron processor
- Syntax SV266AD motherboard
- 1GB RAM, 1 DIMM from Crucial
- 4GB Quantum Fireball hard drive (used for OS with separate LVM VG)
- 45GB hard drive (Western Digital WD450AA)
- 120GB hard drive (Western Digital WD1200BB)
In case you were wondering, it was only about $2 more to get a 1GB DIMM than a 256MB DIMM when I bought it and the 45GB & 120GB drives provide the shared storage under LVM.
Not a bad system, but not terribly modern either. For example, my problems begin with the fact that SATA is a newer technology than the Duron boards or, for that matter, RHEL 3 (because it sports the 2.4 version of the kernel).
I've had that Duron processor for over 6 years, but the original motherboard died in April of 2004. That's why I built my dual Opteron workstation (in May, 2004). I let the Duron chip sit for about a year before I bought the new motherboard, which required me to get new RAM as the old one was too specific to the memory controller on the old, now dead, board. I decided the newly resurrected Duron system should be my file server and the old file server hardware now serves as a firewall/router. I was able to just move the 3 hard drives (and a couple of NICs) to the new system and everything came up just fine; no OS reinstall. Man, I love Linux.
Anyway, when I went to install the new SATA hard drive that I had picked up, I decided that I should rearrange the other drives so that when I pull the 120GB & 45GB drives from the box, there will still be really good airflow around the new drive. As I shuffled the 4 hard drives around in the 6-drive hanging cage (this case has room for another 5 5-1/4 inch drives and 1 3.5 inch drive), I realized that I was just going to have to do it again anyway, so I arranged things such that I should be able to get the most air over the 4 drives. I'll just move the new one when I pull the old drives out.
But when I hooked everything back up, the system (by which I mean the BIOS) could not see the new SATA controller. To make matters worse, it could no longer see the 4GB system drive either. At that point, however, I had to leave for a few business trips and didn't get around to working on the box again until this past weekend. It turns out that I merely had to reverse the IDE ribbon cable going to the 4GB drive and all is well with it.
Then, when I booted the system up with the 4GB drive back in view, I got a kernel panic. Looks like the latest kernel I have installed for RHEL 3 (2.4.22-37.0.1) can't find the root partition at all. That was quickly fixed by editing /boot/grub/menu.lst in a rescue environment; it turned out that there was no value for the root= parameter on the kernel line. After rebooting, I got a slightly different kernel panic. It seems there is a misconfiguration in this kernel's initrd, so that it can't do the pivot_root(8) properly (seems like the old_dir it's trying to use doesn't exist). So, I booted into the next previous (2.4.22-37) kernel, which also fails, since it can't find anything in the /etc/fstab file.

OK, another trip into the rescue environment and, sure enough, there are zero bytes of content in the /etc/fstab file (/me shakes head). So, I reconstructed the basic lines needed to get the OS up and running, rebooted again, and voilà; I finally have a running system. I still had to add a few more lines to /etc/fstab before everything was ready for normal operation, but that only took about 3 minutes.
Still, the BIOS doesn't see the new SATA controller. Perhaps a BIOS update will get things going. I'm thinking that perhaps the BIOS just doesn't have the first clue what kind of card it is, so it doesn't initialize it into the PCI bus. To test this theory, I booted from a handy SUSE Linux 10.1 CD and checked some things out in the installer's partitioning tools and with the command line. No dice; there just is no /dev/sda device appearing at all.
All right, then, let's update the BIOS. Of course, it's a DOS program, so I decided to burn a CD, as I don't even have a floppy drive installed on this box and didn't really want to dig one out. My first attempt at creating a bootable CD from a DOS floppy I have used for about a decade failed to boot. I did a little digging to find out what I needed to do to properly create a bootable CD with mkisofs. I came up with this command line:
mkisofs -r -b workdos95.img -c boot.cat -o SV266AD.iso SV266AD-BIOS/
The trick is that the workdos95.img file (created by running dd if=/dev/fd0 of=workdos95.img bs=512 with the working boot floppy in the drive) has to be placed in the SV266AD-BIOS directory (in my case, since that's what I named it on my hard drive while I was building the CD). That isn't exactly clear from the mkisofs(8) man page.
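Putting the whole recipe together, it goes something like this (the flash utility and BIOS image filenames below are placeholders for whatever your board vendor ships; everything else is from my actual run):

```shell
mkdir SV266AD-BIOS
# image the working DOS boot floppy straight into the mastering directory
dd if=/dev/fd0 of=SV266AD-BIOS/workdos95.img bs=512
# drop in the BIOS flash utility and BIOS image (hypothetical names)
cp awdflash.exe sv266ad.bin SV266AD-BIOS/
# note: the -b path is given relative to the directory being mastered
mkisofs -r -b workdos95.img -c boot.cat -o SV266AD.iso SV266AD-BIOS/
```

Burn the resulting SV266AD.iso with your favorite burning tool (cdrecord, for example) and you have a bootable DOS CD.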
Burning the resulting ISO9660 image to a blank CD-R produced a working, bootable DOS CD. The oakcdrom.sys DOS driver even works with my Pioneer slot-loading DVD drive. I was able to change to the G:\ drive (the CD) and run the BIOS update utility. Sadly, rebooting showed that the newest BIOS still didn't bring up the PCI SATA controller. Nuts.
To satisfy those scratching their heads over the workdos95.img file: WorkDOS is a name I have used for utility boot disks that I have built. They are simply a DOS boot floppy with almost all of the main C:\DOS\ files, a couple of other tools and some CD-ROM & NIC drivers included on a 1.44MB floppy. They have been quite useful over the years, though I haven't used one in a long time.
The next test is to pull out the card and drive, plug them into my dual Opteron box (which has 4 SATA ports on board) and make sure the drive is good and see if the card is visible. I'm working on that now.
Perhaps I have a bad card. Maybe this BIOS will never be able to see it. Either way, the next iteration of my home file server will make use of a way cool SATA controller: the 3ware 9590SE-16ML. Oh, yeah :) !
Posted by lamontp at 7:55 PM
December 6, 2005
Notebook Hard Drive Juggling
As many of you already know, my notebook's hard drive thought it would be fun to watch me squirm around, wondering if I would lose some of my data.
Last week, when I arrived in Sacramento and booted up, I discovered a few issues with unreadable sectors on various parts of my hard drive. It also turned out that Windows XP would not boot (it would just reboot during boot up). There was a smattering of other sectors that were completely unreadable.
I ordered a Hitachi 80GB 2.5 inch 7200rpm replacement, which arrived at the office on Thursday. I picked it up late Friday evening on my way home from the airport and spent most of Saturday and Sunday (until my 3:10pm flight to San Diego) rebuilding. I took the old drive out and placed it in my USB 2.0 (but the notebook only does 1.1 :( ) external drive carrier. The old drive is a Hitachi 80GB 2.5 inch unit that, I think, is 5400rpm. Luckily, it should still be under its 3 year warranty, too.
Next, I manually partitioned the new drive to exactly match the partitioning that I had on the old drive (I wanted the same setup in this case). Then, I tried to install Windows XP Professional into partition 4. It failed on the formatting step, which was interesting since my earlier attempts at installing Windows XP directly failed much earlier in the CD boot process. So, I did the same thing I did before, installed Windows 2000 and then immediately upgraded to Windows XP Professional.
Next, I reinstalled World of WarCraft and downloaded the 252MB patch (rolling up a year's worth of stuff). There were a couple more cycles of launching and downloading minor updates. This whole process took about 3-1/2 hours. I took a break from rebuilding the system and spent some time playing WoW with my wife :) .
At about 11pm on Saturday evening, I got back to work and tried to install Fedora Core 4. I tried to do this via NFS but it didn't like me. Also, I found out this notebook will not boot from USB drives. So, I burned the DVD ISO to my only blank DVD and did the install from there.
I went with a minimal install, stripped out about 58 or so packages I don't need/want and then started installing the stuff I wanted from there. I made some thorough notes this time, including creating a Kickstart file that I could use to redo the base install (including stripping those 58 or so packages).
Before I could start copying files, I had to change the Volume Group name of the LVM VG on the old drive. I booted from the DVD into the rescue environment and ran the lvm command after creating /dev/hda with the mknod command. I then used the vgrename command to change vg0 to vg1. Then I quit from the lvm command and used mknod to create the /dev/sda node and ran lvm again. Then I vgrename'd:
vgrename vg0 old-corsair
vgrename vg1 vg0
I could then reboot and start mounting things without conflicts between two volume groups with the same name. It took a long time to copy over my home directories and some other parts of the filesystems from the old hard drive, so I let that run overnight.
In the morning, I noted that about a dozen or so email messages had been lost due to unreadable drive sectors. Since I use IMAP for almost all of my email accounts, that's not a problem, as I pulled down copies of the missing mail when I reconnected. I finally got everything I needed in place and copied over my encrypted partition just before I had to leave for the airport. I was able to stop at my office on the way to the airport and print out boarding passes (which bought me a few more minutes :) ), my itinerary and a couple of other things that I needed for this business trip.
Sunday and Monday evening, I continued to migrate all the data from my old drive to the new one. Tonight, I should be able to get all the databases (both MySQL and PostgreSQL) migrated over (I copied the files already, but I have to get the software up and running). This coming weekend, while at home, I will reinstall the rest of my Windows games and copy over the saved game files.
After all this is done, I will securely wipe the entire drive using shred and see if I can get it to remap any of the bad sectors, which I'm doubting it can do at this point. Once that is done, I'll see about getting an RMA replacement. That way, I can have an 80GB 2.5inch USB hard drive to take around with me :) .
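As an aside, shred works on regular files too, which makes for a harmless illustration; wiping the whole drive is just shred -v on the device node (destructive, so triple-check the device name first):

```shell
# demonstrate on a scratch file, not the actual drive
echo "secret data" > scratch.bin
shred -u -n 3 scratch.bin    # overwrite the contents 3 times, then unlink it
[ ! -e scratch.bin ] && echo "wiped"
```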
You should note, I was very lucky this time. Although most of my sensitive data is archived to other systems, and all the source code is in one CM system or another, I was able to get the data off the old, failing drive. There were some files that would have been lost if not for that (thankfully, not many that were of any real importance). This episode has me really wanting to expand my storage capabilities at home. If only I could find some of this "money" stuff that people keep telling me about :) .
Moral of the story: THIS IS A REMINDER TO BACK UP YOUR FILES. Especially on your notebooks. Also, you should have a drive carrier for a 2.5 inch drive that lets you hook it up to USB. That makes life much more convenient with a notebook.
Posted by lamontp at 2:21 PM
June 18, 2005
Some "Security" Measures are Just Plain Worthless
On Thursday, June 17th, 2005, Stuart and I placed an order with Crutchfield for two pairs (one each) of the Bose Quiet Comfort 2 noise canceling headphones. However, getting Crutchfield to actually ship the order was just a little more difficult than usual for online ordering.
Over the past several years, I have placed many an online order. These have included purchasing RAM & other computer parts for several systems, two Sharp Zaurus handheld computers (PDAs) and other assorted items. As I can not guarantee that someone will be at my house at whatever moment a delivery service pulls up, I always have these orders delivered to my office (as do most of my co-workers). The problem was that Crutchfield had one of their agents phone the bank (which issued the credit card I had used) to verify that the phone numbers I had given them were listed on the account. My office number was not, so they were holding the shipment until they could get me to fix it. This was after the bank had already approved the transaction (in fact, the money had been sent out to their merchant account by the time I first contacted the bank about this).
This was the first time I have had difficulty getting a merchant to ship an order.
The process of approving a credit card transaction involves the merchant providing the card processing system information that I submitted. This includes my name and both the billing and (if the merchant chooses to do it) the shipping addresses. If they do not match up or the shipping address is not listed with the bank as an authorized ship-to address on the credit card account, then the merchant will not get an approval for the transaction. Merchants are not permitted to verify phone numbers via the electronic submission methods and are, in fact, not required to even collect any phone numbers.
All entities which issue credit cards (more generally, those who issue "credit instruments") are required to maintain a verification phone line for merchants to use in the event of complications or failures in the normally employed authentication method(s). So, it turns out that Crutchfield, as a matter of their own corporate policy, will always phone your bank to see if all the phone numbers you provided are on the account. They have to do it this way, since there is no way to submit phone numbers for verification electronically. Since no other merchant with whom I have done business has ever done this, I never bothered to provide the bank with the extra phone numbers. They only had the house number and my wife's old place of work.
After talking to the bank on Friday, I told Crutchfield that they should be able to verify the phone numbers. They called back late Saturday afternoon and told me that they still could not verify the phone numbers. From the next conversation with the bank, I discovered that the new computer system (which had just been put in a week before) used by the merchant verification call center at the bank will only show them the one main phone number and no others. So I phoned Crutchfield and told them the old number that they could not use to get in touch with me, but that would verify just fine.
Crutchfield upgraded the shipping for free to ensure that Stuart and I would have our headphones by the end of the week. They should ship out on Monday.
To summarize the security implications here, Crutchfield has implemented a "security" process as a matter of policy that fails. It's not surprising that it fails, since the credit card processing system was not designed to handle this extra mechanism (in fact, phone numbers are specifically excluded). In this instance, the extra process failed and caused delays in shipping that were grossly exacerbated by their further delays in contacting me regarding the problem.
This incident was a security failure because:
* In the end, they required false information in order to get past their "safeguard".
* It took three (3) days longer.
* It cost them extra money, both in the time wasted by their personnel in conversing with me & the bank and for "making it right" with the customer by upgrading the shipping.
* I would guess that they spent more than their profit margin, resulting in a net loss for the transaction.
* It frustrated the customer, making it less likely that I would do business with them again.
* It damaged the customer's faith in their business.
* It damaged the customer's faith in their security.
Posted by lamontp at 6:54 PM
May 27, 2005
Updating the IPW2200 Driver & Firmware
This driver is fairly easy to install and use.
Before going over the steps needed to get these cards working, I should tell you that I have only done this on Fedora Core. These instructions should work unmodified for RHEL3, RHEL4 and any distributions derived from Red Hat, but I can not guarantee that you will not need to tweak this a little bit. In other words, "Your Mileage May Vary."
With that out of the way, let's get into the setup.
Download the Driver and Firmware Files
It is important to read through the information related to each new release of this driver. These cards do not have their firmware stored on-board; it is loaded into the card by the driver when it starts up.
I did not pay sufficiently close attention when I downloaded the 1.0.4 driver. I have been running 1.0.3 since I installed the IPW2915 in my notebook. For the 1.0.4 release, updated firmware must be installed. I failed to notice that requirement and had to go get the files using another computer (thanks to Evan for the use of his notebook for this) and place them on a USB KeyChain Drive. The ipw2200 1.0.4 driver will not work without the corresponding updated (version 2.3) firmware files.
Extract the Source Code & Firmware Files
After downloading all the needed files, extract the tarballs in a working directory. Some people prefer to use /tmp/ for such things. I like to use my ~/src/ directory. Wherever you choose to extract the tarballs, these commands will do the trick:

$ cd ~/src/
$ tar -zxf /path/to/downloaded/files/ipw2200-1.0.4.tgz
$ mkdir ipw2200-fw-2.3
$ cd ipw2200-fw-2.3
$ tar -zxf /path/to/downloaded/files/ipw2200-fw-2.3.tgz
A Little Housekeeping
Although the code is ready to be built, there are a couple of housecleaning chores to take care of first. As root (I prefer to use a separate shell, so I can just bounce back and forth between the non-privileged user shell and root, only running commands as root that absolutely have to be), go into the wireless network driver modules directory for the currently running kernel:
# cd /lib/modules/$(uname -r)/kernel/drivers/net/wireless/
Delete the existing IPW2200 and 80211 files & directories.
# rm -R 80211* ipw2200*
These files must be eradicated in order for the new driver to work.
Build the Driver
Return to the unprivileged user shell, change into the driver source code directory and build the driver:

$ cd ipw2200-1.0.4
$ make
Install the Driver
After the build completes, use the root shell to install the driver:

# cd ~username/src/ipw2200-1.0.4
# make install
Make the Firmware Files Available to the Driver
Before trying to load the driver, you have to place the firmware files in the proper directory so the driver can find and load them. Remove any old firmware files, if present, as well:

# rm /lib/firmware/ipw*fw
# cd ../ipw2200-fw-2.3
# cp ipw*fw /lib/firmware/
Use the New Driver
That's it. The updated ipw2200 driver is now ready to function. If you were already running an older version of the driver, unload it first, then load the new driver. Here is the way I prefer to do it:

# rmmod ipw2200
# ifup eth1
The new driver is now in use. Enjoy!
Lamont R. Peterson
Posted by lamontp at 12:14 PM
May 4, 2005
802.11a is Mine
On Friday (April 29, 2005), a shipment arrived at the office for me. It was my new Intel IPW2915 Mini-PCI 802.11a/b/g card. It took only a couple of minutes to remove the old IPW2100 card and replace it with the new one. Now, my Dell Inspiron 4150 is the only system at Guru Labs that has 802.11a support.
After Dax provided me with the files needed to bring the 2915 online (it uses the same driver and firmware files that the IPW2200 he has in his notebook does), I was able to bring up the new NIC and start connecting to wireless networks. However, it defaulted to an 802.11g connection instead of 802.11a after I set up the configuration file (/etc/sysconfig/network-scripts/ifcfg-eth1 on Fedora Core 3).
After using the iwpriv command to set the card to use 802.11a, I began looking for a directive I could add to my /etc/sysconfig/network-scripts/ifcfg-eth1-work file which would cause the NIC to be brought up on 802.11a when I configured the connection using ifup.
The iwconfig command reported that channel 149 was being used when the NIC was on 802.11a, so I decided to try setting that value in the configuration file. It worked. Now, I simply run "ifup eth1-work" and I associate with the 802.11a access point here at Guru Labs. The bandwidth is great; typically better than 802.11g. It's good to be the king ;).
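For reference, here is roughly what that configuration file ends up looking like. This is a hedged sketch: everything besides CHANNEL is a placeholder (the ESSID in particular is made up), and the exact set of directives honored by the ifup-wireless helper can vary between Fedora releases.

```shell
# Sketch of /etc/sysconfig/network-scripts/ifcfg-eth1-work (Fedora Core 3).
# All values other than CHANNEL are illustrative placeholders.
DEVICE=eth1
TYPE=Wireless
ONBOOT=no
BOOTPROTO=dhcp
MODE=Managed
ESSID=gurulabs      # placeholder network name
CHANNEL=149         # the 802.11a channel reported by iwconfig
```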
Lamont R. Peterson
Posted by lamontp at 11:44 AM
March 30, 2005
64bit fglrx Failures
At this point, the 64bit fglrx driver for my ATI Radeon 9800Pro card is almost useless. It rocks for 2D, but all 3D programs cause the system to hard-lock. I can not even ssh into it to kill the X server.
One evening soon, when I have an hour to work on it, I am going to try to get some kind of core dumps out of the driver, if I can. If that does not work, then I will ask ATI for a debugging version of the driver and send them the dumps.
Hopefully, there will soon be a newer version of the fglrx driver for AMD64 that works.
Lamont R. Peterson
Posted by lamontp at 5:21 PM
March 13, 2005
fglrx Dual Head Strangeness
On Friday evening, I decided to work on getting dual head support running using the newly installed fglrx driver. It only took a couple of tries and I had it running.
There is, however, one odd thing about it; the monitors are swapped. The monitor hooked up to connector 1 on the card is getting the configuration signals meant for connector 2, and vice versa.
I spent a couple of hours this afternoon trying to get it to swap the outputs back to the correct order, to no avail. I could just set things up the other way around, but then the second monitor would be the "primary" which just feels weird. For now, I am just swapping the connectors back and forth when I boot to Windows.
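For anyone wanting to experiment, head ordering in X is normally expressed in the ServerLayout section of /etc/X11/xorg.conf, and swapping the positions of the two Screen lines is the usual first thing to try. This sketch is generic (the identifiers are made up) and assumes the two heads are configured as separate X screens, which may not match how the fglrx driver maps its physical connectors internally.

```
Section "ServerLayout"
    Identifier  "DualHead"
    Screen  0  "Screen0"  0 0
    Screen  1  "Screen1"  RightOf "Screen0"
EndSection
```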
Posted by lamontp at 6:37 PM
March 11, 2005
64bit Dual-head 3D
Sweet success is mine!
Finally, I have had a little time at home to try out the 64bit ATI fglrx drivers for my dual-head Radeon9800Pro card in my dual Opteron workstation. It works very well. First try, too :).
My first interactive 3D program? Chromium.
Just a couple of meaningless benchmarks for those who know what they are:
3809 frames in 5.0 seconds = 761.800 FPS
4325 frames in 5.0 seconds = 865.000 FPS
4313 frames in 5.0 seconds = 862.600 FPS
[I had a processor load meter running at the same time and it did not even register any CPU activity during this test.]
18434 frames in 5.0 seconds = 3686.800 FPS
21609 frames in 5.0 seconds = 4321.800 FPS
21602 frames in 5.0 seconds = 4320.400 FPS
21605 frames in 5.0 seconds = 4321.000 FPS
21608 frames in 5.0 seconds = 4321.600 FPS
21604 frames in 5.0 seconds = 4320.800 FPS
21606 frames in 5.0 seconds = 4321.200 FPS
21605 frames in 5.0 seconds = 4321.000 FPS
21604 frames in 5.0 seconds = 4320.800 FPS
21611 frames in 5.0 seconds = 4322.200 FPS
21602 frames in 5.0 seconds = 4320.400 FPS
21603 frames in 5.0 seconds = 4320.600 FPS
21605 frames in 5.0 seconds = 4321.000 FPS
21605 frames in 5.0 seconds = 4321.000 FPS
21606 frames in 5.0 seconds = 4321.200 FPS
[This time, one of the CPUs registered about 80% utilization during the test.]
Well, I'm off to install 64bit Unreal Tournament 2004. Tee-hee :).
Posted by lamontp at 7:30 PM
March 9, 2005
Finally! 64bit 3D Video for Linux
At home, I have a decent workstation, which I built last May (2004). I bought all the parts and put the whole thing together myself, like I always do.
However, it has not been all coconuts and hammocks in paradise.
The one problem area that I have had to deal with on this system has been the video. Unfortunately, ATI has not released an open source Linux driver for this card yet. What they have done for a while now is to release their Linux driver (minus a couple of bits that are encumbered by patents) under an open source license when they release the next version of a chip line.
Unfortunately, the Radeon 9800 is the latest chip in that line, so they have not pulled the trigger on that one.
Here are the specs:
- Tyan dual Opteron motherboard
- 2 x AMD Opteron 242 processors (1.6GHz)
- 2 x 512MB PC3200 DDR DRAM
- 1 x SATA 160GB 7200RPM Western Digital hard drive
- 1 x ATI Radeon 9800Pro 8x AGP graphics card
- DVD, CD-R/RW (fast, too)
As you can see, I did not skimp on the hardware, though I did not buy the most expensive version of (most) parts, either. A good compromise and a system that I am very happy with.
"No problem. ATI makes a great binary-only driver that does not taint the kernel. I'll just download and use that," I said to myself. Well, that's when the real fun began.
ATI did a really smart thing and made it so that you can build the binary-only driver for your version of the kernel. Their process gives the driver forward-compatibility; when a new kernel package (or version, even) is released, I can build the fglrx driver from ATI for the newer kernel. This is a very good thing. But to accomplish this with a "binary-only" driver, they built all the important stuff ahead of time, and the driver build process simply links it all together.
"How does that cause you a problem, Lamont?" you might ask. Well, those pre-built bits are 32bit. I am using a 64bit system and need 64bit bits. ATI did not have such a beast at the time I built my box.
"Why not install the 32bit X server so you can use the ATI driver?" Because this is a 64bit system and I want it running 64bits. Besides, it dual boots and I play my games under WindowsXP since there are a couple I can not get to run under Linux at all. Once I have hardware 3D under Linux at 64bits, I will move all the games I can from Windows to Linux.
That day is now upon us. Recently, ATI finally released a 64bit version of fglrx (which supports the Radeon 9800Pro) for Linux. I plan to use Unreal Tournament 2004 to "test" the full capabilities of the system. I think I read somewhere that there is a 64bit version of the Linux client for that game available. I hope so.
Posted by lamontp at 7:08 PM
March 8, 2005
My Ultimate Gaming Machine
Every gamer has their own definition of what exactly is "The Ultimate Gaming Machine." Of course, this definition evolves, little by little, every month.
For me, that definition begins with the word, "Notebook" (or laptop, if you prefer). Think I'm crazy? Read on.
The main reason I want to play games on a laptop is convenience. I really enjoy getting together with my buddies for a LAN party. Since I have had larger monitors than most of my gaming friends, we have usually held such parties at my place (it also helps that I have a good LAN, plenty of power & data cables as well as folding tables, a big file server with all the updates to the games already downloaded and a nice fat DSL connection :) ).
However, we still always ended up spending time getting some people's boxes running right.
Enter the notebook. These things are always traveling, so they are already set up to connect to whatever network is available. I do not have to haul around any heavy gear (my Princeton Graphics Ultra 20 weighs something like 80 pounds!).
With a notebook, I am set up and ready to go just as fast as I can pull it out of the travel bag, plug in the mouse, power and LAN, press the power button and finish booting.
Still, my dual Opteron system at home rocks when playing games. Guess I'll still have to have the LAN parties over there from time to time :).
Posted by lamontp at 7:41 PM
March 7, 2005
Notebook Linux Gaming Blues
Some features of my notebook's Radeon Mobility 7500 chipset, however, are not supported by the open source radeon driver. The one I've run into several times already is the lack of support for compressed textures. So I decided to pay ATI's website a visit and download their fglrx driver for this chip. No joy.
Unfortunately, their website no longer lists any drivers for my chip. The driver supporting the Radeon 8500 and above is the only one available.
So then I decided to try Dell's support website. I pulled up the available files for Linux for the Inspiron 4150 notebook; no video drivers at all.
Maybe I will get lucky and find it somewhere out on the Internet. In the meantime, if any of you out there have ATI's fglrx Linux driver which supports the Radeon 7500 cards, please, let me know.
Posted by lamontp at 8:57 PM
February 26, 2005
Yesterday, during the last day of class, my notebook kept hard locking on me, over and over again. With a trip coming up next week, the prospect of fighting my way through this new problem did not look very appealing.
After a little discussion, Dax and I struck a deal in which I have purchased his "old" notebook, a Dell Inspiron 4150 (1.9GHz Pentium4; 1GB RAM; Radeon Mobility 7500 video; 1400x1050 LCD; 80GB Hard Drive; MiniPCI wireless NIC).
After spending time installing Fedora Core 3 on here, I am now using the "new" notebook, which I have named corsair.
Installation went fairly smoothly. I dropped out of Anaconda to a shell early in the installation process and created a 20GB partition at the end of the drive for WindowsXP Professional (which I will be installing later today). I want to be able to dual-boot to Linux & Windows.
I do almost everything in Linux, except for a couple of games that I like to play which I have been unable to get running under Linux, yet. I think that a notebook with great graphics makes for the ultimate gaming system, since I like to get together with friends and play. I will also be taking a couple of games with me while traveling.
This notebook is a step up from the HP Pavilion ze1210 (1.2GHz Athlon XP; 1GB RAM; Twister_K video; 1024x768 LCD; 60GB 7200RPM Hard Drive). After 2 or 3 weeks go by, once I am certain that I am not missing any data that I need from wraith (the HP notebook), then I will clean up wraith and set it up for Charlotte (my wife, in case you did not know that).
I am quite happy with this purchase. However, I expect that I will probably need to have the batteries rebuilt by the end of 2005. I believe that this system will be quite satisfactory for another year. At that point, I believe I will be looking to purchase an Athlon64 notebook or perhaps an Apple PowerBook.
Posted by lamontp at 2:54 PM