[#] Sat Dec 18 2010 21:48:00 EST from Ford II @ Uncensored

Subject: Re:


I guess I was a bit vague in that message. The second identical drive used to be part of a two-disk RAID 1 mirror, from back when I thought it would be fun to run RAID.
So when it booted off that drive, it came up in broken RAID 1 mode and ran happily, waiting for me to replace the first broken drive.

I have long since thought better of the idea; I blew away the first drive and installed the OS from scratch, but left the second drive alone, so (as it never occurred to me) it was still a viable, functioning half of a RAID 1 setup.

So I had a better idea: I invented RAID minus 1.
Rather than have the OS manage live mirroring, I decided to remove everything from the second drive and write a script to dd drive 1 to drive 2 once a week.
It's like RAID 1 but not live. I have daily incremental backups to cover the span between dd runs.
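A weekly mirror like that can be one cron job calling a tiny script. A minimal sketch, with hypothetical device names (and note that dd to the wrong target destroys the disk):

```shell
#!/bin/sh
# weekly-mirror.sh -- sketch of the "RAID minus 1" weekly copy.
# WARNING: overwrites everything on $DST.
SRC=/dev/sda    # primary drive (assumed name)
DST=/dev/sdb    # spare drive   (assumed name)

# large block size, keep going past read errors, pad bad blocks
dd if="$SRC" of="$DST" bs=4M conv=noerror,sync

# crontab entry to run it Sundays at 3am, e.g.:
#   0 3 * * 0  /usr/local/bin/weekly-mirror.sh
```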

What I realized is that it's not just my data that's important, but the OS setup as well. I get such a sick feeling in my stomach when I realize I have no working machine and have to do nothing but install, configure, install, configure, and fight with vmware (which is really the worst of it), just so I can work from home and get my mail and such.
So by mirroring the drive once a week, if the first drive fails I just change my BIOS boot order and voila: a working machine, with no installing of anything.

"BUT!" you say, "if you run dd on an active partition, it's not in any real state that should be backed up. It's a useless backup."
"NAY!" I retort.

I read an article a few years ago suggesting that you don't shut down your machine, you just power it off. Why wait for it to fuck around flushing buffers and whatnot when the OS has so many layers of transaction log that it generally recovers pretty well most of the time (especially ext3 and ext4)? So really, if my live-partition dd backup isn't usable, that's a failing of ext4, not of my backup mechanism.
Of course I haven't tried this out yet....

[#] Sun Dec 19 2010 08:31:10 EST from dothebart @ Uncensored

Subject: Re:


you should use LVM and snapshots to get that steady state ;-)
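A rough sketch of that LVM approach, assuming a volume group vg0 with a logical volume named root (both names hypothetical); the snapshot freezes a consistent view you can then copy at leisure:

```shell
# create a copy-on-write snapshot of the live volume
lvcreate --size 5G --snapshot --name rootsnap /dev/vg0/root

# copy from the frozen view instead of the live filesystem
dd if=/dev/vg0/rootsnap of=/dev/sdb bs=4M    # or mount it and tar it

# drop the snapshot when done
lvremove -f /dev/vg0/rootsnap
```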

but in general, drbd via loopback would probably amount to the same thing as online mirroring.

But I really like your idea of having the two drives wear out on different schedules, since one of the biggest failings of any raid is that the drives age at the same pace, so you end up losing both at around the same time.



[#] Sun Dec 19 2010 15:28:07 EST from Ford II @ Uncensored

Subject: Re:


I was thinking of powering down the second drive (if I can) except for once a week to do the dd.
By the way, did I mention it takes EIGHT HOURS to dd 650 gigs?
I gotta think there's a block size setting I can set somewhere.
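For what it's worth, dd defaults to 512-byte reads and writes, and the setting is bs=. A small self-contained demonstration with scratch files (paths hypothetical):

```shell
# make a 10 MB test file, then copy it with a 4 MB block size
dd if=/dev/zero of=/tmp/src.img bs=1M count=10 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=4M 2>/dev/null

# verify the copy is byte-identical
cmp -s /tmp/src.img /tmp/dst.img && echo "copies match"
```

On a whole disk, something like bs=4M typically cuts the copy time dramatically compared with the 512-byte default.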

[#] Sun Dec 19 2010 18:43:39 EST from dothebart @ Uncensored

Subject: Re:


Sun Dec 19 2010 15:28:07 EST from Ford II @ Uncensored Subject: Re:
I was thinking of powering down the second drive (if I can) except for once a week to do the dd.
By the way, did I mention it takes EIGHT HOURS to dd 650 gigs?
I gotta think there's a block size setting I can set somewhere.

basically I'm not a friend of dd here. If your filesystem is inconsistent or broken, you will copy the brokenness.

Also, reading the full disk stresses your system and the disk.

I'd rather have a monthly full backup with tar over the whole range, and incremental ones on a daily basis.

OK, tar has one real big disadvantage: you can't mount it.

But it's probably also a good (and faster, less stressful) solution.

OK, gzip effectiveness and incremental-tar effectiveness affect the size and the quality of my suggestion...
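With GNU tar, that monthly-full-plus-daily-incremental scheme is the -g (--listed-incremental) snapshot file. A small sketch using scratch paths (all names hypothetical):

```shell
rm -rf /tmp/data /tmp/snap /tmp/full.tar.gz /tmp/incr.tar.gz
mkdir -p /tmp/data && echo one > /tmp/data/a.txt

# full backup: records file state in the snapshot file
tar -czf /tmp/full.tar.gz -g /tmp/snap -C /tmp data

# something changes...
echo two > /tmp/data/b.txt

# incremental backup: only archives what changed since the snapshot
tar -czf /tmp/incr.tar.gz -g /tmp/snap -C /tmp data
tar -tzf /tmp/incr.tar.gz
```

The listing of the incremental archive shows only the new file, not a.txt.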

 

otoh, I'd rather do it via tar | nc to another box. If you've got a problem with your memory or your disk controller, that backup will also break; if you have a second box around, use it as the target.

I had a situation where I created an ext2 filesystem, copied everything over with tar | tar, put other disks into the hardware raid, booted a newer kernel, and the backup was totally broken, the filesystem totally fucked up.

Since then I prefer to have tcp in the line of my backups.



[#] Sun Dec 19 2010 20:20:53 EST from IGnatius T Foobar @ Uncensored

Subject: Re:



"RAID Minus 1." Now I've heard everything. :)

Ok, well to get the kernel to stop recognizing a RAID volume, you need to change its partition type back to regular Linux, and you also have to clear the md signature on the partition.

I don't know how to do that. Sorry.
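For the record, the usual recipe is mdadm's --zero-superblock plus a partition-type change. A hedged sketch, destructive, with a hypothetical array and device name:

```shell
# stop the array if the kernel has assembled it
mdadm --stop /dev/md0

# erase the md signature so it's no longer detected as RAID
mdadm --zero-superblock /dev/sdb1

# then change the partition type from "fd" (Linux raid autodetect)
# back to "83" (Linux), e.g. with fdisk's 't' command
```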

[#] Mon Dec 20 2010 14:24:43 EST from Ford II @ Uncensored

Subject: Re:


Actually that's a good point; the key is to make the partition not a Linux RAID volume. I should have thought of that.

Somebody said dd will copy a broken filesystem. True, but tar won't be able to tar off a broken filesystem either, if it's broken enough.
The idea is hopefully to notice whatever problem it is before I do a mirror copy, so I still have a good image. I guess that would require that I reboot the machine every week before the backup, to make sure it's in a good state before I back it up.
why oh why is this so hard.


I guess next time the system gets messed up, I'll have to come up with something simpler to recover with.


[#] Mon Dec 20 2010 15:02:35 EST from Spell Binder @ Uncensored


Rebooting may not be necessary. I would think going into single-user mode would shut down enough processes to allow you to make a clean backup. Then when the backup is done, transition back to multi-user state.
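On a SysV-style system of that era, the transition might look like this (a sketch; run it from a console, not from inside an X session):

```shell
telinit 1     # drop to single-user mode; most services stop, filesystems quiesce

# ... take the backup here ...

telinit 3     # return to the usual multi-user runlevel (often 2 on Debian/Ubuntu)
```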

[#] Mon Dec 20 2010 15:32:39 EST from Ford II @ Uncensored


oh, I wasn't thinking about a clean filesystem state so much as having a bootable machine, as in: grub is configured correctly, the kernel loads, all the device drivers fire up and find their respective devices, filesystems are mounted and so on. I want to be sure all THAT stuff works before I make a backup.

[#] Mon Dec 20 2010 18:54:35 EST from dothebart @ Uncensored

Subject: Re:


I would boot off an SD card, a thumb drive, or something like that.

Separate your system from the payload.

The tar will give you a snapshot of your actual data. More precise than the dd.

Unless you have several disks to dd to.



[#] Mon Dec 20 2010 22:38:16 EST from IGnatius T Foobar @ Dog Pound BBS II

Subject: Re:


why oh why is this so hard.

I suspect it's because you're trying to outsmart the system.

[#] Tue Dec 21 2010 15:02:55 EST from Ford II @ Uncensored

Subject: Re:


I suspect it's because you're trying to outsmart the system.

the system needs outsmarting.

The problem with separating the system from the payload is that the system IS the payload.
When you install something, even from yum or apt or synaptic or whatever, does it store this install and metadata in a separate filesystem? No. It reconfigures /etc and /usr/share and all that, and THAT'S one of the many things I don't want to have to do/set up all over again if the machine dies.
Every time I put a new applet on my gnome applet bar, every time I close a shell and my history is stored in .bash_history... THAT'S the stuff I want to back up.

[#] Tue Dec 21 2010 19:03:38 EST from dothebart @ Uncensored

Subject: Re:


ah, ok, you're using that inferior linux where you have to configure everything by hand. Well.

Me, I do 'dpkg --get-selections' and it gets me 98% of the way back to where I was.

I'd rather do gentoo, where it's waste-your-time-compiling instead of waste-your-time-configuring ;-P

</rant>
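The dpkg trick mentioned above, spelled out as a sketch (Debian/Ubuntu only; the file path is hypothetical):

```shell
# on the old system: record which packages are installed
dpkg --get-selections > /tmp/pkg-selections.txt

# on the fresh install: feed the list back and let apt do the rest
dpkg --set-selections < /tmp/pkg-selections.txt
apt-get -y dselect-upgrade
```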



[#] Wed Dec 22 2010 23:45:49 EST from Ford II @ Uncensored

Subject: Re:


I actually am very keen on the gentoo philosophy, because compiling for your hardware gets you optimizations the stock kernel does not, so it's basically free speed, or at least not losing speed for no reason.
But I am too old to fight with gentoo, so I go with ubuntu because they make it as easy as possible.

[#] Tue Dec 28 2010 07:48:18 EST from IGnatius T Foobar @ Uncensored

Subject: Re:


I guess I'm even stodgier than you, then, because I've gotten to the point where I try really hard not to install anything that isn't in the repository.

When you do that, it's pretty easy to never have to do a reinstall; you just keep rolling forward with the in-place updates.

If you insist on wiping it clean from time to time, the best available practice for "keeping your stuff" is to install it all in /usr/local and then when you reinstall or move to a new machine, you bring along your backups of /usr/local and /home.
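Carrying /usr/local across can be a single tar pipe. A self-contained demonstration using scratch directories in place of the real roots (all paths hypothetical):

```shell
rm -rf /tmp/oldroot /tmp/newroot
mkdir -p /tmp/oldroot/usr/local/bin /tmp/newroot
echo 'echo hi' > /tmp/oldroot/usr/local/bin/mytool

# archive usr/local relative to the old root, unpack it under the new one
tar -cf - -C /tmp/oldroot usr/local | tar -xf - -C /tmp/newroot

ls /tmp/newroot/usr/local/bin
```

The same pipe works against a mounted new root on a fresh install.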

Also, running a network of multiple machines where /usr/local and /home are shared with NFS will really hone your skills in this area.

[#] Wed Dec 29 2010 13:12:07 EST from Ford II @ Uncensored


But there's still lots of stuff that these installs throw in /etc. No simple way around that that I can think of.

I don't wipe my machine exactly. I mean, I do, but I end up copying back /home and all my /usr/local/bin stuff, and when things don't run anymore I have to hunt down the package I'm missing.
Eventually I get to a place where I've been able to do everything I want for a while, and then I throw away the backup of the old machine.
I have a mount called /sp that I try to throw all my data into where possible, so I only really have to back that up and move it from machine to machine to get most of what I had.
I recently finished my banishment of vmware. It took two solid days, but I've got most everything I had before.
I can vpn, play pandora, rip dvds and whatnot, and so far virtualbox is keeping up.
So hopefully I'll never have to suffer that dreaded "oh no, vmware won't compile its kernel modules again" stomach-drop-out thing.

[#] Fri Dec 31 2010 17:27:43 EST from IGnatius T Foobar @ Uncensored


And now you'll just get the dreaded "oh no virtualbox won't compile its kernel modules again" stomach drop out things instead.

Or did you manage to pull it all together using only drivers that are in the repository?

[#] Fri Dec 31 2010 23:39:36 EST from LoanShark @ Uncensored


And now you'll just get the dreaded "oh no virtualbox won't compile its kernel modules again" stomach drop out things instead.

Doesn't happen on Windows.

[#] Sat Jan 01 2011 01:08:58 EST from ax25 @ Uncensored

Subject: Re:


Or KVM :-) (recent kernels)



[#] Sat Jan 01 2011 01:26:44 EST from IGnatius T Foobar @ Uncensored


Doesn't happen on ESXi either. That's the way to go.

[#] Sat Jan 01 2011 01:46:24 EST from ax25 @ Uncensored

Subject: Re:


Binary blobs scare me.  I have a few too many binary-blob programs that no longer run under current glibc releases.  If KVM gets replaced at some point with something better, I would guess I'd have a way to continue using my ancient VM images, which I keep around to punish myself and my free time.  I can't say that about the more recently abandoned VMware Server product: Firefox breaks the browser plugin, and I end up having to use a hack for the ESXi management tool just to see the hosts I have deployed.  I would guess the blob would quit working at some point as well with a glibc update.  At least the qemu convert tools give me some hope.

Probably being too harsh on VMware, but KVM fits the bill for 99.99% of what I do (i.e. no games or graphics-intensive stuff, but they are working on that via separation from the vnc stuff).
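The qemu conversion mentioned above is qemu-img; a sketch, with hypothetical file names:

```shell
# convert a VMware disk image to qcow2 for use with KVM
qemu-img convert -f vmdk -O qcow2 olddisk.vmdk olddisk.qcow2

# sanity-check the result
qemu-img info olddisk.qcow2
```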


