From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [ck] Re: Linus 2.6.23-rc1
Date: Sat, 28 Jul 2007 18:06:11 UTC
Message-ID: <fa.8qO+SIspEtOqkaaZnwzyvF5huZ4@ifi.uio.no>

On Sat, 28 Jul 2007, Jan Engelhardt wrote:
>
> You cannot please everybody in the scheduler question, that is clear. So
> why not offer dedicated scheduling alternatives (plugsched comes to mind)
> and let them choose what pleases them most, and handles their workload best?

This is one approach, but it's actually one that I personally think is
often the worst possible choice.

Why? Because it ends up meaning that you never get the cross-pollination
from different approaches (they stay separate "modes"), and it's also
usually really bad for users in that it forces the user to make some
particular choice that the user is usually not even aware of.

So I personally think that it's much better to find a setup that works
"well enough" for people, without having modal behaviour. People complain
and gripe now, but what people seem to be missing is that it's a journey,
not an end-of-the-line destination. We haven't had a single release kernel
with the new scheduler yet, so the only people who have tried it are
either

 (a) interested in schedulers in the first place (which I think is *not* a
     good subset, because they have very specific expectations of what is
     right and what is wrong, and they come into the whole thing with that
     mental baggage)

 (b) people who test -rc1 kernels (I love you guys, but sadly, you're not
     nearly as common as I'd like ;)

so the fact is, we'll find out more information about where CFS falls
down, and where it does well, and we'll be able to *fix* it and tweak it.

In contrast, if you go for a modal approach, you tend to fixate those
separate modes forever, and you never get something that just works well:
people have to switch modes whenever they switch workloads.

[ This, btw, has nothing to do with schedulers per se. We have had these
  exact same issues in the memory management too - which is a lot more
  complex than scheduling, btw. The whole page replacement algorithm is
  something where you could easily have "specialized" algorithms in order
  to work really well under certain loads, but exactly as with scheduling,
  I will argue that it's a lot better to be "good across a wide swath of
  loads" than to try to be "perfect in one particular modal setup". ]

This is also, btw, why I think that people who argue for splitting desktop
kernels from server kernels are total morons, and only show that they
don't know what the hell they are talking about.

The fact is, the work we've done on server loads has improved the desktop
experience _immensely_, with all the scalability work (and the work on
large memory configurations, etc etc) that went on there, even though that
kind of work used to be totally irrelevant for the desktop.

And btw, the same is very much true in reverse: a lot of the stuff that
was done for desktop reasons (hotplug etc) has been a _huge_ boon for the
server side, even though there were certainly issues that had to be
resolved (the sysfs stuff so central to the hotplug model used tons of
memory when you had ten thousand disks, and server people were sometimes
really unhappy). A lot of the big improvements actually happen because
something totally _unrelated_ needed them, and then it just turns out that
it's good for the desktop too, even if it started out as a server thing or
vice versa.

This is why the whole "modal" mindset is stupid. It basically freezes a
choice that shouldn't be frozen. It sets up an artificial barrier between
two kinds of uses (whether they be about "server" vs "desktop" or "3D
gaming" vs "audio processing", or anything else), and that frozen choice
actually ends up being a barrier to development in the long run.

So "modal" things are good for fixing behaviour in the short run. But they
are a total disaster in the long run, and even in the short run they tend
to have problems (simply because there will be cases that straddle the
line, and show some of _both_ issues, and now *neither* mode is the right
one).

			Linus


From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [ck] Re: Linus 2.6.23-rc1
Date: Sat, 28 Jul 2007 22:19:17 UTC
Message-ID: <fa.QyNeWnMuuyYfqPAVODJxm7Du0ow@ifi.uio.no>

On Sat, 28 Jul 2007, Bill Huey wrote:
>
> My argument is that scheduler development is open ended. Although having
> a central scheduler to hack is a good thing, it shouldn't lock out or
> suppress development from other groups that might be trying to solve the
> problem in unique ways.

I don't think anything was suppressed here.

You seem to say that more modular code would have made for a nicer way to
do schedulers, but if so, where were those patches? Con's patches didn't
do that either. They just replaced the code.

In fact, Ingo's patches _do_ add some modularity, and might make it easier
to replace the scheduler. So it would seem that you would argue for CFS,
not against it?
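
(For reference, the kind of modularity in question is a per-policy hook
table: each scheduling class fills in a set of function pointers, and the
core scheduler calls through them instead of hard-coding one policy. The
sketch below is only a rough approximation of that idea, not the actual
code from the CFS patches; the hook names and signatures here are
illustrative.)

	/* Sketch of a scheduling-class hook table.  Illustrative only;
	 * the real kernel structure has more hooks and different
	 * signatures. */
	struct rq;			/* per-CPU runqueue (opaque here) */
	struct task_struct;		/* a task (opaque here) */

	struct sched_class {
		const struct sched_class *next;	/* classes tried in priority order */

		/* add/remove a task from this class's runqueue */
		void (*enqueue_task)(struct rq *rq, struct task_struct *p, int wakeup);
		void (*dequeue_task)(struct rq *rq, struct task_struct *p, int sleep);

		/* should a newly woken task preempt the one running now? */
		void (*check_preempt_curr)(struct rq *rq, struct task_struct *p);

		/* pick who runs next, and put back the task that just ran */
		struct task_struct *(*pick_next_task)(struct rq *rq);
		void (*put_prev_task)(struct rq *rq, struct task_struct *p);

		/* periodic timer tick for the running task */
		void (*task_tick)(struct rq *rq, struct task_struct *p);
	};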

> I think that's kind of a bogus assumption from the very get-go. Scheduling
> in Linux is one of the most unevolved systems in the kernel that could
> still go through a large transformation and get big gains like what
> we've had over the last few months. This is evident with both schedulers:
> both do well, and it's a good thing overall that CFS is going in.
>
> Now, the way it happened is completely screwed up in so many ways that I
> don't see how folks can miss it. This is not just Ingo versus Con, this
> is the current Linux community and how it makes decisions from the top down
> and the current cultural attitude towards developers doing things that
> are:

I don't think so.

I think you're barking up the totally wrong tree here.

I think that what happened was very simple: somebody showed that we did
badly and had benchmarks to show for it, and that in turn resulted in a
huge spurt of coding from the people involved.

The fact that you think this is "broken" is interesting. I can point to a
very real example of where this also happened, and where I bet you don't
think the process was "broken".

Do you remember the Mindcraft study?

Exact same thing. Somebody came in, and showed that Linux did really badly
on some benchmark, and that an alternate approach was much better.

What happened? A huge spurt of development in a pretty short timeframe
that totally _obliterated_ the Mindcraft results.

It could have happened independently, but the fact is, it didn't. These
kinds of events where somebody shows (with real numbers and code) that
things can be done better really *are* a good way to do development, and
it's how development generally ends up happening. It's hugely
motivational, both because competition is motivational in itself, and
because somebody showing that things can be done so much better opens
people's eyes to it.

And if you think the scheduler situation is different, why? Was it just
because the Mindcraft study compared against Windows NT, not another
version of Linux patches?

The thing is, development is slow and gradual, but at the same time, it
happens in spurts (btw, if you have ever studied evolution, you'll find
the exact same thing: evolution is slow and gradual, but it also happens
in sudden "spurts" where you have relatively much bigger changes happening
because you get into a feedback loop).

Another comparison to evolution: most of the competitive pressure actually
comes from the _same_ species, not from outside. It's not so much rabbits
competing against foxes (although that happens too); quite a lot of it is
rabbits competing against other rabbits!

			Linus

