[#] Tue May 30 2006 17:02:08 EDT from "hjalfi" <hjalfi@uncensored> to Kinetix <Kinetix@uncensored.citadel.org>

Subject: Re: (no subject)

Don't know about UFS, I'm afraid (I only ever ran OpenBSD, which recommends FFS).

I hear there's a Google Summer of Code project to add journal support to FFS, though.



[#] Tue Jun 06 2006 10:05:13 EDT from the8088er @ Uncensored

Du da du da deeee
I been downloading BSD
Du da du da deee
But the mirror is slow
Du da du da deee
But the install's done started
Du da du da deee
So I got nowhere to go

I've got the slowed down mirror blues....
It feels like twentyfour hundred baud....
I just can't get back off of this slowed down

International Mirror...

Du da du da deee
Extracting into root directory
Du da du da deee
Is only 50 percent done
Du da du da deee
But at 29.1 K per Second
It may be here till I'm 21

I've got the slowed down mirror blues...
It feels like 2400 baud...
I just can't get back off of this slowed down
International mirror...

Yes, at 9 AM when I had no sleep the night before, I get insane.

[#] Tue Jun 06 2006 23:21:01 EDT from IGnatius T Foobar @ Uncensored

/dev/harmonica: I/O ERROR

[#] Wed Jun 07 2006 00:29:38 EDT from IO ERROR @ Uncensored

Hey, don't blame me.

[#] Wed Jun 07 2006 05:39:22 EDT from the8088er @ Uncensored

LOL

[#] Thu Jul 06 2006 18:57:49 EDT from the8088er @ Uncensored

I want to set up a FreeBSD system to get all my email from a .Mac account by IMAP, run it through SpamAssassin, and then host an IMAP server that supports the IDLE command with two mailboxes: one for junk and one for my Inbox without the junk. How can I do this with minimal effort but still with a focus on security?
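A minimal sketch of one way to wire that together, assuming Python 3, SpamAssassin's spamc with spamd running, and Dovecot pointed at the local Maildirs; the hostname, credentials, and Maildir paths below are placeholders rather than details from the thread:

```python
#!/usr/bin/env python3
"""Rough sketch only: pull mail from a remote IMAP account, classify it with
SpamAssassin, and file it into two local Maildirs for Dovecot to serve.
All hosts, credentials and paths are hypothetical placeholders."""

import imaplib
import mailbox
import subprocess

REMOTE_HOST = "mail.example.com"            # placeholder for the .Mac IMAP host
USER, PASSWORD = "user", "secret"           # placeholders
INBOX_DIR = "/home/user/Maildir"            # clean mail, served by Dovecot
JUNK_DIR = "/home/user/Maildir/.Junk"       # junk folder (Maildir++ layout)


def is_spam(raw_msg: bytes) -> bool:
    """Ask spamd (via spamc -c) whether the message is spam."""
    # spamc -c reads the message on stdin and exits 1 when it is spam.
    result = subprocess.run(["spamc", "-c"], input=raw_msg,
                            stdout=subprocess.DEVNULL)
    return result.returncode == 1


def fetch_and_file():
    inbox = mailbox.Maildir(INBOX_DIR, create=True)
    junk = mailbox.Maildir(JUNK_DIR, create=True)

    imap = imaplib.IMAP4_SSL(REMOTE_HOST)
    imap.login(USER, PASSWORD)
    imap.select("INBOX")

    _typ, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _typ, msg_data = imap.fetch(num, "(RFC822)")
        raw = msg_data[0][1]
        # Maildir.add() writes to tmp/ and renames into new/, which is the
        # safe-delivery convention Dovecot expects.
        (junk if is_spam(raw) else inbox).add(raw)
        imap.store(num, "+FLAGS", "\\Seen")   # leave a copy, just mark it read

    imap.logout()


if __name__ == "__main__":
    fetch_and_file()
```

Run from cron under an unprivileged user, this keeps the fetching, filtering, and serving pieces separate, which helps on the security side.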

[#] Fri Jul 07 2006 16:17:46 EDT from the8088er @ Uncensored

/me kicks the unixers of uncensoredland....

[#] Fri Jul 07 2006 16:39:20 EDT from Grey Elf @ Uncensored

RTFM Moron.

[#] Fri Jul 07 2006 16:42:26 EDT from Ford II @ Uncensored

Hire somebody?
Don't use IMAP?
Don't expect great things of the IDLE command?
Security through obscurity?

[#] Fri Jul 07 2006 18:29:58 EDT from the8088er @ Uncensored

Was hoping for a little more in the way of... assistive comments.

[#] Fri Jul 07 2006 18:40:29 EDT from the8088er @ Uncensored

I have configured Dovecot and my Treo is downloading messages from it fine. The two problems I'm having: first, I'm using getmail to download the messages into the mail folder, and getmail will not stay connected. It downloads new messages and disconnects, and there doesn't seem to be a way to stop it from disconnecting. Second, Dovecot doesn't pick up on new messages. It just sits there when new mail is dropped into the folders, and I have to disconnect and reconnect to make it see new messages.

Any ideas?
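For what it's worth, getmail is a batch poller by design: each run downloads what's new and exits. The usual workaround is to re-run it on a short interval (cron works just as well as the toy loop below) and let it deliver into a Maildir, so Dovecot sees the new files in new/ the next time the client asks. A sketch, assuming getmail is on the PATH and already configured via ~/.getmail/getmailrc:

```python
"""Toy poller standing in for "keeping getmail connected": just re-run it
every couple of minutes. A cron entry achieves the same thing."""

import subprocess
import time

POLL_SECONDS = 120   # arbitrary placeholder interval

while True:
    try:
        # Each getmail run fetches whatever is new and then exits; that
        # disconnect is expected behaviour, not a bug.
        subprocess.run(["getmail"], check=True)
    except subprocess.CalledProcessError as exc:
        print("getmail run failed:", exc)   # keep polling across hiccups
    time.sleep(POLL_SECONDS)
```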

[#] Fri Jul 07 2006 23:22:49 EDT from IGnatius T Foobar @ Uncensored

That sounds like a client problem. Dovecot will tell the client about new messages if the client asks. Did you try it with a different IMAP server?
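One way to test the client-problem theory, sketched with Python's imaplib; the host and credentials are placeholders, and it assumes Dovecot accepts a plaintext login from localhost. If this loop sees the unseen count change while the Treo doesn't, the server is answering and the client simply isn't asking (or isn't issuing IDLE):

```python
"""Poll Dovecot directly and watch for new mail, independent of the Treo."""

import imaplib
import time

imap = imaplib.IMAP4("localhost")   # placeholder: the Dovecot box
imap.login("user", "secret")        # placeholders
imap.select("INBOX")

# IDLE support is advertised as a capability; the client needs it for pushes.
print("server capabilities:", imap.capabilities)

for _ in range(10):
    imap.noop()                                  # poll the server
    _typ, data = imap.search(None, "UNSEEN")
    print("unseen message numbers:", data[0].decode() or "(none)")
    time.sleep(30)

imap.logout()
```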

[#] Mon Jul 24 2006 12:41:38 EDT from Hue Jr. @ Anansi-Del

Try security through non-existence.

[#] Tue Oct 17 2006 18:21:07 EDT from rss @

Subject: Introducing Project Blackbox

As I've been saying for a while, our customers - more specifically, a segment* of our customers - face a diversity of tough challenges. What does the CIO in midtown Manhattan do when she runs out of roof space or power? How does an aid agency deliver basic connectivity to 5,000 relief workers in a tsunami-stricken metropolis? What does an oil company do when they want to move high performance analytics onto an offshore platform or supertanker? Or what does a large web services company do when they want to cookie-cutter their infrastructure next to a hydroelectric plant for cheap power - within weeks, not years?

None of these are easy problems to solve - especially one computer at a time. They're more commonplace than you'd think across the globe. And now you know the motivation behind our asking a simple question, "what would the perfect datacenter look like?"

Improving upon its father, the traditional datacenter, it'd have to be more space and power efficient. Very high performance, and designed for machines, not people with plush offices. It'd have to be available within weeks, not years. And portable, to allow customers to deploy it anywhere - in a disaster area, or next to a hydro generator.

But let's start with the most basic question. How big would it be?

In the world of vertically scaled, or symmetric multi-processing systems, pools of CPUs share access to a common set of memory. But the size of a given system has a physical and logical limitation: it can be no bigger than the private network used to connect all the disparate internal elements.

But the future of the web is clearly moving toward horizontal or grid computing. In a grid, a conventional network is used to connect collections of smaller*, general purpose elements (like Sun's Niagara or Galaxy systems). The question of "what's the biggest grid?" has no obvious answer - they can be as big as you want. Just as at TACC, where they're building the largest supercomputer on the planet out of general purpose elements.

So a while back, we asked a few talented systems engineers a simple question: is there an optimum size for a horizontally scaled system? Interestingly enough, the answer wasn't rooted in the Solaris scheduler or a PhD thesis. It was rooted in the environmental realities faced by the customers I cite in the second paragraph. And perhaps more interestingly, in your local shipyard.

Shipyard?

The biggest thing we could build would ultimately be the biggest thing we could transport around the world - which turned out to be a standardized shipping container. Why? Because the world's transportation infrastructure has been optimized for doing exactly this - moving containers on rails, roads and at sea. Sure, we could move things that were bigger (see image), but that wasn't exactly a general purpose system.

So the question at hand became, "how big a computer can you build inside a shipping container?" And that's where the systems engineering started.

First, why are servers oriented in racks and cooled by fans front to back? To maximize convenience for humans needing to interact with systems. But if you want to run a "fail in place" datacenter, human interaction is the last thing you want. So we turned the rack 90 degrees, and created a vastly more efficient airflow across multiple racks. And why not partially cool with water in addition to air - if you burn your hand, do you wave it in the air, or dunk it in a bowl of ice water? The latter; water's a vastly more efficient chiller.

A non-trivial portion of an average datacenter's operating expense is the power required to chill arbitrarily spaced, very hot computing platforms - vector the air, augment with a water chiller, and cooling expense plummets. As does your impact on the environment. Did I mention the eco in eco-responsible stands for economics? For many companies, power is second only to payroll in datacenter expenses. (Yes, the power bill is that big.)

And that's how we started to go after power efficiency.

Second, if you can generate power for less than the power company charges you, why not do so - put a generator next to the chiller in a sister container, and you've got access to nearly limitless cheap power. (Heck, you could run it on bio-diesel.)

And if power rates or workload requirements change and you want to relocate your container - good news, the world's transportation infrastructure is at your disposal. Trains, trucks, ships, even heavy lift helicopters. You can place them on offshore oil rigs. In disaster areas. In remote locations without infrastructure. To wherever they're most needed.

Finally, in most datacenters I visit, I see more floor tiles than computers. Why? Because operators run out of power capacity long before they fill up their datacenters - leading them to waste a tremendous amount of very expensive real estate with racks spaced far apart. In a container, we go in the opposite direction - with plenty of power and chilling, we jam systems to a multiple of the density level and really scrimp on space. And it can run anywhere, in the basement, the parking garage, or on a rooftop. Where utilities, not people, belong.

With a ton of progress behind us, and enough customer interaction to know we're on to something, that's why we've unveiled our alpha unit, and gone public with the direction. We've done a lot of detail work, as well, working to integrate the container's security systems into enterprise security systems. It knows where it is via GPS (you can locate them via Google Maps, if that's your bent). Sensors know if the container's been opened or moved. We've even done basic drop tests (one, accidentally) to deal with transportation hazards (the racks inside can handle an 8g impact!). And we've explored camouflage options, too (you really don't want a big Sun logo screaming "steal me, I'm full of RAM!" on customer units).

Every customer we've disclosed this to has had a different set of concerns or challenges. None in my mind are insurmountable. But we don't have all the answers, of course; that's why we'll be working with key partners and integrators (one customer wanted the container to detonate if it was breached - er... perfectly doable, just not something Sun would do).

At a top level, we know there is no one hammer for all nails.

But in this instance, there might be one blackbox for all of network computing.

Specs and details to come - and in the interim, here are some great photos and usage scenarios (I especially like the Mars Rover companion - that was Greg's idea).

____________________________________

* more on this later.

http://blogs.sun.com/jonathan/entry/a_logical_end_point

[#] Wed Oct 18 2006 11:16:10 EDT from IGnatius T Foobar @ Uncensored

(The above post was copied from Mr. Schwartz's blog, which we carry here on Uncensored in a hidden room called "Jonathan Schwartz". We may not carry it much longer, though, because it's not all that interesting anymore.)

Anyway, a few of us were talking about "Project Blackbox" yesterday.

It has a very high coolness factor. Unfortunately, it doesn't fare very well in terms of practical usefulness.

Ask yourself: when you build a data center, what kind of data center do you build?

Answer: you build an EMPTY data center. You don't build it jam-packed with a bunch of preconfigured equipment from Sun or any other vendor. You leave those racks and floor space open, and then you bring in equipment that serves your current needs. And you leave space unused for your future needs.

Now imagine you're a typical Sun customer, and your data center is being used for, say, financial services applications. Can you imagine having to explain to an auditor why your data center is in a shipping container plopped down in the parking lot, or on the roof, or in a garage?

[#] Wed Oct 18 2006 15:03:39 EDT from Freakdog @ Dog Pound BBS II

Newer IBM AIX hardware comes with a capacity on demand option... it comes prepopulated with as much RAM and CPU as it will hold, but you don't actually pay for the CPU and RAM you're not using until you pay to enable it.

[#] Wed Oct 18 2006 19:38:14 EDT from IO ERROR @ Uncensored

Now imagine you're a typical Sun customer, and your data center is being used for, say, financial services applications. Can you imagine having to explain to an auditor why your data center is in a shipping container plopped down in the parking lot, or on the roof, or in a garage?

Because the corporate headquarters just got wiped out by a hurricane?

[#] Wed Oct 18 2006 20:57:52 EDT from IGnatius T Foobar @ Uncensored

Serves you right for putting servers in Louisiana. They ought to be in Hawthorne, NY.

[#] Thu Oct 19 2006 02:12:09 EDT from IO ERROR @ Uncensored

I'll consider it, just as soon as I get a quote for 1U or 2U of space and a 100Mbit port (not that I'll use much bandwidth, but I do invite attention from digg and /. from time to time)...

[#] Thu Oct 19 2006 13:37:33 EDT from Magus @ Uncensored

That way he can worry about tornadoes rather than hurricanes?
