The problem is that GRUB doesn't recognize the RAID 1 partitioned drive.
It's supposed to, but it doesn't, so it can't mount it, so it can't find /boot/initrd... so it gives me Error 15.
I learned a new neat trick about grub.
It has autocomplete. I knew that: when you type root (<tab><tab>, it lists all of the available drives.
But if you type root <tab><tab>, it tries to mount the current filesystem and gives you a file list.
And when I do THAT, I get Error 17: can't mount the filesystem.
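For anyone following along, the completion trick at the GRUB (legacy) prompt looks roughly like this (the device names here are just illustrative):

```
grub> root (<TAB><TAB>
 Possible disks are:  hd0 hd1 hd2
grub> root (hd0,0)
grub> find /boot/grub/stage1
```

`find /boot/grub/stage1` is also handy here: it searches every drive GRUB can read and prints which ones contain that file.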
So why does everybody say GRUB can mount a RAID 1 filesystem when it can't?
It's also worth noting that mount can't mount it either:
mount /dev/sdc1 /mnt/sdc1/
mount: unknown filesystem type 'linux_raid_member'
So what the fuck.
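That mount error happens because blkid identifies the partition as a raid member, not as ext2. Two things I've seen work in this situation (a sketch — device and mount-point names are assumed from the posts above):

```shell
# Assemble the array from the one good member and run it degraded,
# then mount the md device instead of the raw partition:
mdadm --assemble --run /dev/md1 /dev/sdc1
mount /dev/md1 /mnt/sdc1

# Or, with old 0.90 metadata the raid superblock sits at the END of the
# partition, so the ext2 data starts at offset 0 and forcing the
# filesystem type can work directly:
mount -t ext2 /dev/sdc1 /mnt/sdc1
```

The second trick only works for metadata formats that keep the superblock at the end; with newer 1.1/1.2 metadata the data is offset and you have to assemble the array.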
If it's RAID 1, there's simply a readable ext2 filesystem inside, which it should be able to mount from one disk...
I ran fsck on the raid device and the filesystem is fine.
It must be something about the superblock, because it can't find the filesystem.
I just installed Fedora 11. Linux still has a long way to go. I put it on a separate 750 GB HD so I could triple-boot Fedora, Ubuntu, and Windows, each on its own drive. Fedora, of course, does not recognize the other two drives. In the course of installing it, it somehow messed with those other drives, and now I can't boot them from the boot menu either. Only a small problem, as I backed up all my data first. Still... since every distro handles things so differently, I think it is holding things back.
I finally booted off my raid device.
Turns out the big hurdle was that while the kernel is running and the machine is up, hd0=sda, hd1=sdb, and hd2=sdc.
But! Drop to a command line in GRUB while it's BOOTING and ask the same question, and you get
hd0=sda, hd1=sdc, and hd2=sdb.
So this is why it couldn't find the /boot directory: it was looking at the wrong drive.
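For what it's worth, as I understand it the hdN numbering GRUB uses at boot comes straight from the BIOS, so menu.lst entries have to use the boot-time mapping; /boot/grub/device.map only tells the grub shell run from inside Linux how to translate between the two. Given the mapping above, a corrected device.map would look something like this (sketch):

```
(hd0)   /dev/sda
(hd1)   /dev/sdc
(hd2)   /dev/sdb
```

Keeping device.map in sync at least stops grub-install from writing the boot sector to the wrong disk later.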
Once I fixed that, all sorts of other problems came up, mostly having to do with /dev/md1 not being there when the kernel started up. For whatever reason, initrd's boot sequence didn't bring md1 up by the time the kernel was looking for it.
I fiddled with a bunch of things, and of course I'll never know which one did it, but I got it to boot, mount md1, and then load the kernel, and voilà, here I am.
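On the md1-not-up-at-boot front, the usual suspects I know of are a missing array definition in mdadm.conf and a stale initrd that doesn't know to assemble the array. A sketch of the typical fix on a distro that uses mkinitrd (paths assumed):

```shell
# Record the currently running arrays so the initrd's boot scripts
# can assemble them by UUID at boot time
mdadm --detail --scan >> /etc/mdadm.conf

# Rebuild the initrd so it picks up mdadm.conf and the raid1 module
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
```

If the fiddling above happened to touch either of those two things, that's probably what did it.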
I just hope this disk copy has everything my old drive had....
It goes to sync up the new drive and....
the raid drive has disk errors.
Of course, when I copied data to it there was no problem, because I didn't hit any of those sectors. But when it's syncing, I guess it's doing a block-by-block copy, so it finds all the errors.
And now I'm kind of screwed, because it started syncing, so my original un-RAIDed boot drive is gone.
Worst case I have a backup from yesterday, but now there's new data on the raid drive. So check me on this.
Do you think I can make a second RAID 1 device, again with two drives, one missing; copy all the files from the bad RAID to the good RAID; take down the bad RAID device and remove the bad drive; put another good drive in; and then sync the new RAID device to the new disk?
That should work right?
Unless somebody's got a simpler idea.
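That plan should work. Sketched out with mdadm (the device names are placeholders for whatever the actual disks are):

```shell
# 1. Create the new degraded RAID 1: one real disk, one "missing" slot
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdd1 missing

# 2. Make a filesystem and copy everything off the bad array
mke2fs /dev/md2
mount /dev/md2 /mnt/new
cp -a /mnt/old/. /mnt/new/

# 3. Retire the bad array, then hand the replacement disk to the new one
mdadm --stop /dev/md1
mdadm --manage /dev/md2 --add /dev/sde1   # resync starts automatically
```

The `missing` keyword is what lets you build a RAID 1 with only one member; adding the second disk later kicks off the sync on its own.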
It was stupid of me to think that multiple RAIDs wouldn't work: a) just because, and b) the demo showed three RAID 1 setups running together.
Anyway, I made the new RAID, copied everything back onto the good drive, swapped out the bad drive, added a new blank drive to the array, and 8 minutes later it was all synced up.
Whole thing should have taken 2 hours not 2 weeks, but I finally got it all working.
so i ran some timing tests...
Not sure how valid this is, but I figure 20 GB is bigger than my 1 GB of memory, so it can't be buffering that much...
37543842+0 records in
37543842+0 records out
19222447104 bytes (19 GB) copied, 562.427 s, 34.2 MB/s
root@io:/sp# time dd if=/dev/md0 of=/dev/null
37543680+0 records in
37543680+0 records out
19222364160 bytes (19 GB) copied, 490.806 s, 39.2 MB/s
I actually ran it back and forth 3-4 times, this is the last cycle.
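One thing about those numbers: dd's default block size is 512 bytes (that's where the ~37.5 million records come from: 37543842 × 512 = 19222447104), so each run is millions of tiny read() calls. A larger bs= usually helps a sequential-read test. A minimal, safe sketch using a scratch file instead of the real array:

```shell
# Create a 64 MB scratch file, then read it back with a 1 MB block size.
# (Against the real array this would be: dd if=/dev/md0 of=/dev/null bs=1M)
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 2>/dev/null
dd if=/tmp/ddtest of=/dev/null bs=1M
```

Whether it actually moves the MB/s number much depends on the hardware, but it removes syscall overhead as a variable.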
neato. I like technology.
I assume SATA doesn't have this problem.
I also noticed, after going over my dmesg god knows how many times, that Linux can detect a 40-wire cable vs. an 80-wire one, and it only does ATA/33 with a 40-wire cable. So I put an 80-wire cable on there (I wasn't sure it would work, since the port on the motherboard wasn't blue), but I guess the blue is just there to tell you which is primary, because it worked; now they both run at ATA/133.
I would have thought I'd do better than that, though. I was thinking closer to twice as fast, but hey, I'll take anything that's free.
For various reasons which are not very interesting right now, my home server is running Ubuntu. Normally I run CentOS on servers, which has a lovely file called /etc/modprobe.conf that specifies (among other things) which SCSI driver you want to use. I don't see that on Ubuntu (which I'm assuming is the same as Debian for this purpose). As far as I can tell, they seem to have made this sort of thing completely automatic. Am I correct in guessing that it figures out The Right Thing To Do (tm) on *every* boot, so that when I move my disks over to a machine with a different SCSI chipset, it'll load the correct driver automatically?
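For reference, the CentOS lines in question look something like this in /etc/modprobe.conf (the driver names are just examples):

```
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptscsih
```

And as far as I understand it, yes: on Debian/Ubuntu the initramfs probes the hardware via udev on every boot, so moving the disks to a machine with a different controller should load the right driver automatically, as long as that driver is actually included in the initramfs (the default one includes most storage drivers).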
I'm using modconf to configure the modules to be loaded in Debian. Nice curses frontend.
I need a program to convert some podcasts from MP3 to WAV format. I want to burn a standard audio disc, as the player in my car can't handle MP3s. Any suggestions? I'm running Fedora 11, if that matters.
mpg123 -w filename.wav filename.mp3
Well... that sounds interesting... for a few minutes. Everyone sounds like Alvin, Theodore, and Simon. I see no options for changing playback speed. I'll keep searching. But thanks anyway.
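The chipmunk effect usually means a sample-rate mismatch: mpg123 -w writes the WAV at the MP3's native rate, and a lot of podcasts are 22.05 kHz, so if the disc gets burned as if it were CD audio (44.1 kHz) everything plays at double speed. Rather than changing playback speed, resampling the WAV to CD format should fix it. A sketch with sox, if you have it installed (filenames are placeholders):

```shell
# Decode to WAV, then resample to CD-audio format (44.1 kHz, stereo)
mpg123 -w temp.wav podcast.mp3
sox temp.wav -r 44100 -c 2 podcast-cd.wav
```

In sox, the format options placed before the output filename describe the output, so -r 44100 -c 2 sets the rate and channel count of podcast-cd.wav.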
OK, I loaded it up, and it looks useful. So does that mean that if I want my boot device to use a different SCSI driver, I need to run modconf, put the correct driver in, and then run mkinitrd?
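That's the general idea, though on Debian/Ubuntu the native route, as far as I know, is /etc/initramfs-tools/modules plus update-initramfs rather than mkinitrd. A sketch (the driver name here is a placeholder for whatever your controller actually needs):

```shell
# Force a specific SCSI driver into the initramfs, then rebuild it
echo sym53c8xx >> /etc/initramfs-tools/modules
update-initramfs -u
```

Modules listed in that file get copied into the initramfs and loaded early at boot, before the root filesystem is mounted.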