Comments on 2018-11-06 Memory Mystery

How is 388 requests in 2 minutes a problem? That’s no more than 4 requests per second - shouldn’t the webserver be able to handle that?

Is there some kind of memory leak or pathological case that the Chinese crawlers happen to hit? The sudden spike in memory use is not just because of 4 hits/s, is it?

Can you reproduce the memory use, maybe on a local machine replaying the requests from the log around the time of the problem (or just bombarding it with requests)?

Just random questions from my armchair :-)

Adam 2018-11-09 21:46 UTC


Heh. :) The main problem is that we’re not talking about static HTML pages. If somebody does a search on this wiki, there’s a grep over 7000 pages, and then the wiki code looks at the matches, renders the pages, highlights the phrases, and so on. That means each request takes a while. If a search takes 10 seconds and I get 4 searches per second (I doubt this was the explanation, I’m just trying to show how the server is prone to accidental denial of service attacks), then at any given moment there are 40 processes grepping through 7000 files each. And then the server dies, because this is a tiny 2G virtual machine I run somewhere, managed by myself and nobody else. At least that’s what I suspect.

That’s why my /robots.txt says Crawl-delay: 20.

Replaying the requests from the log is a good idea. I should do it the next time this happens. And I should make sure to compare the memory used with the number of processes running at the time.
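
A rough sketch of what such a replay could look like, assuming an Apache-style access log and a local copy of the wiki listening on localhost:8080 (the log path and URL are placeholders, not the real setup); the idea would be to watch memory use and the number of wiki processes while it runs:

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $log  = shift // 'access.log';        # access log to replay (placeholder)
    my $base = 'http://localhost:8080';      # local test instance (placeholder)
    my $ua   = LWP::UserAgent->new(timeout => 30);

    open my $fh, '<', $log or die "Cannot open $log: $!";
    while (my $line = <$fh>) {
        # Common Log Format: ... "GET /wiki?search=foo HTTP/1.1" ...
        next unless $line =~ m{"(?:GET|HEAD) (\S+) HTTP};
        my $response = $ua->get($base . $1);
        printf "%s %s\n", $response->code, $1;
    }
    close $fh;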

– Alex Schroeder 2018-11-10 00:30 UTC


More armchair comments: Could you exclude the search URL in robots.txt?

A more laborious solution, if searching turns out to be the problem, is to index the pages using, say, Xapian (the Perl bindings work nicely).
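
Not code from this wiki, just a minimal sketch of what indexing and searching via the Search::Xapian bindings might look like; the index path and the stand-in page data are assumptions:

    use strict;
    use warnings;
    use Search::Xapian (':all');

    # Build (or open) an on-disk index and add one document per page.
    my $db = Search::Xapian::WritableDatabase->new('/path/to/index', DB_CREATE_OR_OPEN);
    my $tg = Search::Xapian::TermGenerator->new();
    $tg->set_stemmer(Search::Xapian::Stem->new('english'));
    my @pages = ( { name => 'HomePage', text => 'example page text' } );  # stand-in data
    for my $page (@pages) {
        my $doc = Search::Xapian::Document->new();
        $tg->set_document($doc);
        $tg->index_text($page->{text});
        $doc->set_data($page->{name});       # store the page name for display
        $db->add_document($doc);
    }

    # Searching is then a lookup in the index instead of a grep over every page.
    my $enq = $db->enquire('example query');
    for my $match ($enq->matches(0, 10)) {
        printf "%d%% %s\n", $match->get_percent(), $match->get_document()->get_data();
    }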

Anyway, good luck in finding the cause!

Adam 2018-11-10 13:49 UTC


As they are already ignoring the crawl delay, I think it is safe to say that they either don’t know or don’t care. My robots.txt contains the line Disallow: /wiki?, which should take care of anything that is not a page view. But it’s being ignored.
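
For reference, together with the crawl delay mentioned above, the relevant part of the file looks roughly like this (the wildcard user agent line is a paraphrase, not a verbatim copy):

    User-agent: *
    Crawl-delay: 20
    Disallow: /wiki?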

A long time ago I used a Perl module that indexed the text and allowed scoring of results, but eventually it turned out that brute-forcing the search with grep was actually faster. But that was years ago and on a different setup. Perhaps I should revisit this.

These days I am more interested in blocking misbehaving agents. The old solution limits visits from the same IP number (surge protection), and I have fail2ban watching my log files to watch over all the other services (cgit and the like). But if the requests come from all over a subnet, these defenses don’t work, unfortunately.
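
For context, the per-IP banning that fail2ban does is configured with a jail along these lines (the jail name, filter, log path, and thresholds are made up for illustration). Every ban it issues targets a single address, which is exactly why it is of little use when the requests are spread across a whole subnet:

    [wiki-surge]
    enabled  = true
    # hypothetical filter matching wiki requests in the access log
    filter   = wiki-surge
    logpath  = /var/log/apache2/access.log
    # ban an IP after 20 hits within 120 seconds, for an hour
    findtime = 120
    maxretry = 20
    bantime  = 3600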

– Alex Schroeder 2018-11-10 18:09 UTC

