[#] Thu Dec 10 2009 17:30:09 EST from LoanShark @ Uncensored



It has nothing to do with Linus in this case, really. A distro like Fedora should standardize on a single compatibility level of the kernel, and any updates should not break source compatibility with driver modules, at minimum, so that people like NVidia, who have their "binary blob plus source-compiled glue layer" model, can have their stuff continue to work.

Sadly Fedora just doesn't do what they allege they're going to do, in this regard. RHEL does a much better job of it.

[#] Thu Dec 10 2009 18:34:26 EST from LoanShark @ Uncensored


in other news, looks like there's been an effort to remove Lance Davis
from any credits in CentOS:

--- CentOS-Base.repo	2009-08-21 17:24:26.000000000 -0400
+++ CentOS-Base.repo.rpmnew	2009-10-01 08:27:30.000000000 -0400
@@ -1,6 +1,5 @@
 # CentOS-Base.repo
 #
-# This file uses a new mirrorlist system developed by Lance Davis for CentOS.
 # The mirror system uses the connecting IP address of the client and the
 # update status of each mirror to pick mirrors that are updated to and
 # geographically close to the client. You should use this for CentOS updates
@@ -17,7 +16,6 @@

[#] Thu Dec 10 2009 23:03:31 EST from kinetix @ ColabX


Fedora having standardization?  Come now, really?

Fedora's the community-supported (read "beta testground") version of RedHat.  I presume you might remember when they forked it off, right?

I think there are plenty of distros out there that will provide a standardized compatibility level for a very long time.. Debian comes to mind, but so does CentOS, and even, to a degree, Ubuntu, if you stick with a particular release through its support period.

 



[#] Fri Dec 11 2009 10:58:38 EST from Peter Pulse @ Uncensored


Do you really think it is practical to allow the kernel developers to keep breaking device driver compatibility.. and expect each distro and each device driver writer to keep up with it, and for each user to depend upon all the device drivers they use to be updated in every update of their operating system?
It is not practical. If I have got some oddball piece of hardware, for which I got drivers perhaps two or three years ago, and it worked.. and I come upon a Linux system where I want to use my hardware, I should be able to easily install that driver and start using the hardware. There should be a pretty good expectation of that working. I should not have to be concerned that when I got the driver I was using Fedora and now I am using Ubuntu.. or that I was using the driver on the kernel from six months ago, not last week's kernel.
Asking distros to freeze time is not the answer.

[#] Fri Dec 11 2009 10:59:17 EST from Peter Pulse @ Uncensored


.. and asking distros to maintain a full set of every device driver a person is ever going to need is not the answer either.

[#] Fri Dec 11 2009 12:16:47 EST from IGnatius T Foobar @ Uncensored


It's just not practical. I understand that they do this to try to encourage driver development to be maintained within the mainline kernel, but when devices simply don't work, it hinders Linux adoption.

Also, not every kernel module is a device driver. Wouldn't it be nice if VMware didn't break every time you upgraded the kernel? (If the answer is "VMware is closed source and is teh ebbil" then use some other example if you prefer; any open source program that has a kernel component but isn't maintained inside the mainline kernel is a valid example.)

They've got all sorts of elaborate workarounds like DKMS to compensate for what should be a very easy problem to solve. Keep the kernel API and ABI stable for at least a couple of years at a time. Linux is mature enough now that the driver model doesn't need frequent, major overhauls. What they're doing now is just putting ideology over pragmatism. I'm all for keeping things ideologically pure, but not at the expense of creating software that has real problems.

(I guess that makes me an "open source" person and not a "free software" person.)
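As a concrete picture of the DKMS workaround mentioned above: DKMS is driven by a small dkms.conf file shipped alongside the module source. The keys below are the standard dkms.conf keys; the package name and install location are hypothetical.

```
PACKAGE_NAME="oddballdrv"
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="oddballdrv"
DEST_MODULE_LOCATION[0]="/kernel/drivers/misc"
AUTOINSTALL="yes"
```

Once registered with `dkms add`, the module gets rebuilt automatically against each newly installed kernel's headers: exactly the machinery needed to compensate for an in-kernel API that won't hold still.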

[#] Fri Dec 11 2009 12:53:24 EST from Ford II @ Uncensored


which is why it's so much nicer writing back end software, and not the user interface layer stuff.

[#] Fri Dec 11 2009 18:23:32 EST from cellofellow @ Uncensored


Having an unstable, fluid API/ABI is a good thing. I know it's an
extreme example, but look at Windows, which for a decade or better tried
to be as stable as possible. But stability is stagnation, and Apple and
Linux, who are not concerned with backwards compatibility and rigid
stability, were eating their breakfast. Unfortunately for M$, they
painted themselves into a corner, and when Vista made some changes,
everyone cried foul.

Stability is a red herring. Either it works or it doesn't. The only
stability that isn't a red herring is the sort that lets me keep my
server running for months at a time without a reboot. But things change,
and sometimes upgrades to implementation necessitate upgrades to
interfaces. Why restrict innovation? Why insist on a rigid interface?

-Josh

[#] Fri Dec 11 2009 18:38:03 EST from Ford II @ Uncensored


Linux, who are not concerned with backwards compatibility and rigid
stability, were eating their breakfast. Unfortunately for M$, they

you obviously don't live on the same planet we do.

[#] Fri Dec 11 2009 19:24:02 EST from Harbard @ Uncensored


I don't recall MS ever having a particularly stable product since DOS.

 

I am not a programmer.  Though I do fool around a bit, I am certainly not a system-level programmer.  Is it really that hard to modularize things so an old driver keeps working even if there is a minor change?  Especially something as basic as your video card?  Really, I am interested in that question; it's not just idle bitching.



[#] Fri Dec 11 2009 20:32:42 EST from Ford II @ Uncensored


Layers make things slow. You try to avoid layers the lower you go.
But yeah, you CAN do something about it if you have to.

[#] Fri Dec 11 2009 21:23:50 EST from IGnatius T Foobar @ Uncensored


I don't recall MS ever having a particularly stable product since
DOS.

Sure they did. Xenix was teh r0x0r.

http://uncensored.citadel.org/~ajc/xenix.html

[#] Sat Dec 12 2009 17:39:24 EST from cellofellow @ Uncensored


Still, why is an unstable API a good thing? Unless you're some wizard
who can foresee all possible future changes and design your API to
include those changes, changes will necessitate API changes too. If you
freeze an API for, say, two years, then all changes must conform to that
API, which will be, to say the least, limiting.

I think Linux and the kernel team prefer being allowed to make whatever
changes they please. Third-party drivers aren't really something the
kernel encourages anyway.

-Josh

[#] Sat Dec 12 2009 19:22:27 EST from LoanShark @ Uncensored


Dec 11 2009 10:59am from Peter Pulse @uncnsrd
.. and asking distros to maintain a full set of every device driver a person is ever going to need is not the answer either.

I'm not sure that's what's being asked for. It'd be nice if Fedora could simply do the CentOS thing just for the 12-18 months or so that they support a particular release: in other words, each new kernel update should contain bugfixes only...

[#] Sat Dec 12 2009 19:36:43 EST from LoanShark @ Uncensored


and sometimes upgrades to implementation necessitate upgrades to
interfaces. Why restrict innovation? Why insist on a rigid interface?

See, now, I've been doing development for long enough to see the problems with that idea. Developers like to change interfaces for no other reason than that the new one is slightly more aesthetically pleasing. Developers like to write long lists of "best practices" that are designed to solve some problem or other... and it certainly strokes the ego to solve those problems. But the best practice lists tend to grow and snowball out of control, and then you get these mediocre developers who want to help solve problems but end up making changes that implement, say, some really low-priority item on the above-mentioned best practice list, and they do it while you're trying to solidify/QA your next major release, which is exactly the wrong time to make changes.

I might add that interface stability encourages testable software (or, actually, writing testable software encourages stable interfaces / creates a disincentive to change interfaces...). You really can NOT write meaningful tests for interfaces that are constantly in flux.

Low-level interfaces often need to be decomposed into the most primitive possible implementation, such that the implementation contains as few branches as possible. Think data-access interfaces that manipulate single records instead of arrays, for example. That is the most primitive possible interface. Then, once you arrive at the logically most primitive interface, you DON'T CHANGE IT (unless somebody requests a new feature).

[#] Sat Dec 12 2009 19:46:48 EST from LoanShark @ Uncensored



I might add that said developers often end up changing things in a way that is really a step SIDEWAYS and not really a step FORWARD. Like, something is not necessarily better... just different.

[#] Sat Dec 12 2009 22:02:25 EST from LoanShark @ Uncensored



of course, you could just use debian...

http://loldebian.files.wordpress.com/2008/05/randomness.png

[#] Sun Dec 13 2009 01:37:26 EST from Harbard @ Uncensored


Well, I can see the API changing significantly if I were installing a whole new version, but we're talking 2.6.30.9-99 to 2.6.30.9-102; that should be a very minor change that shouldn't break anything.  That's my beef.  I don't expect 100% backwards compatibility, just a little bit easier round of updates; save all the earth-shattering changes for a major update... at least the third significant digit in the version number.



[#] Sun Dec 13 2009 11:14:54 EST from Ford II @ Uncensored


I'm not sure that's what's being asked for. It'd be nice if Fedora could simply do the CentOS thing just for the 12-18 months or so that...


But isn't that the point of fedora?
The redhat enterprise release is stable and unchanging, and fedora is the development line.
And centos is just a copy of RHEL.
And since centos is free, what's the problem?
If you want stable, you use centos, and if you want to be bleeding edge you get fedora; it's the equivalent of downloading and building the development kernel every day, no?

[#] Sun Dec 13 2009 12:23:51 EST from IGnatius T Foobar @ Uncensored

developers who want to help solve problems but end up making changes that implement, say, some really low-priority item on the above-mentioned best practice list, and they do it while you're trying to solidify/QA your next major release, which is exactly the wrong time to make changes.

That's a difficult thing, when developers aren't communicating with each other enough, and one is working on QA while another is bringing up some brand new feature.  The new feature might be useful but it's the wrong time to commit it.  It just takes a little more communication to get it right.

That kind of thing has no business happening in a mature OS kernel.


