From: Linus Torvalds <torvalds@linux-foundation.org>
Newsgroups: fa.linux.kernel
Subject: Re: [PATCH 10/11] readahead: dont do start-of-file readahead after
Date: Tue, 02 Feb 2010 20:28:39 UTC
Message-ID: <fa.nm/mUi8bOQQvNYdHph+oq8xlW/o@ifi.uio.no>

On Tue, 2 Feb 2010, david@lang.hm wrote:

> On Tue, 2 Feb 2010, Linus Torvalds wrote:
> >
> > Also, keep in mind that read-ahead is not always a win. It can be a huge
> > loss too. Which is why we have _heuristics_. They fundamentally cannot
> > catch every case, but what they aim for is to do a good job on average.
>
> As a note from the field, I just had an application that needed to be changed
> because it did excessive read-ahead. It turned a 2 min reporting run into a 20
> min reporting run, because for this report the access was really random and the
> app forced large read-ahead.

Yeah. And the reason Wu did this patch is similar: something that _should_
have taken just a quarter of a second took about 7 seconds, because
read-ahead triggered on this really slow device that only feeds about
15kB/s (yes, _kilo_byte, not megabyte).

You can always use posix_fadvise() with POSIX_FADV_RANDOM to disable it, but
it's seldom something that people do. And there are real loads that have
random components to them without being _entirely_ random, so in an optimal
world we should just have heuristics that work well.
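For reference, a minimal sketch of what that looks like from userspace,
assuming a hypothetical report file read at scattered offsets (the file name
and offsets are made up for illustration):

	/* Sketch: tell the kernel this fd will be accessed randomly, so it
	 * can dial back read-ahead.  "report.db" is a made-up file name. */
	#define _XOPEN_SOURCE 600
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("report.db", O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* Advise random access over the whole file (offset 0, len 0
		 * means "to end of file").  On Linux this effectively turns
		 * off read-ahead for this file descriptor. */
		int err = posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);
		if (err)
			fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

		/* Scattered reads: each pread() now pulls in only the pages
		 * it actually needs, instead of a whole read-ahead window. */
		char buf[4096];
		off_t offsets[] = { 0, 1 << 20, 7 << 20, 3 << 20 };
		for (int i = 0; i < 4; i++) {
			if (pread(fd, buf, sizeof(buf), offsets[i]) < 0)
				perror("pread");
		}

		close(fd);
		return 0;
	}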

The problem is, it's often easier to test/debug the "good" cases, ie the
cases where we _want_ read-ahead to trigger. So that probably means that
we have a tendency to read-ahead too aggressively, because those cases are
the ones where people can most easily look at it and say "yeah, this
improves throughput of a 'dd bs=8192'".

So then when we find loads where read-ahead hurts, I think we need to take
_that_ case very seriously. Because otherwise the selection bias in how we
test read-ahead will lead us astray.

		Linus
