Somewhat OT: Re: [Mimedefang] graphdefang cores with large amounts of data

Kevin A. McGrail kmcgrail at pccc.com
Wed Oct 15 18:13:35 EDT 2003


> Are most people just running a syslog server on another machine, or are
> they copying local syslogs to another machine periodically?

I don't run a syslog server.  I usually try to run one gigantic server with
more RAM and processors to reduce administration.

> The load on my mailserver rarely goes above about 1.3 and usually hovers
> around .7 or .8 with only around 200M of memory usage.  The only time it
> ever takes a serious processing/RAM hit is when graphdefang is running.
>
> I don't see why graphdefang should be so resource-intensive as to
> require dedicating another machine to it.  I don't need to process my
> webserver logs on another machine, and those files are much bigger than
> my mail server logs.  I'm curious whether the fact that it is a Perl
> script is causing the overhead.  Does anyone think a rewrite in C would
> help the problem?  Someone else mentioned using a relational database
> to store the data.  Is that really necessary?

My worries are:

The size of the maillog being parsed and the efficiency of
File::ReadBackwards (It MIGHT be perfect)

The size of the summary DB coupled with the efficiency of Berkeley DB at
querying that data

How many times the summary DB is accessed to produce the graphs
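To make the first two worries concrete, here is a minimal sketch of what that
hot path looks like — read the maillog backwards with File::ReadBackwards and
bump per-event counters in a DB_File-tied Berkeley DB hash.  This is NOT
graphdefang's actual code; the sample log lines, the MDLOG regex, and the file
names are all illustrative:

```perl
#!/usr/bin/perl
# Hypothetical sketch: tally events from a maillog, newest lines first,
# into a Berkeley DB summary hash.  Every matching line is one DB write.
use strict;
use warnings;
use File::ReadBackwards;
use DB_File;
use Fcntl qw(O_RDWR O_CREAT);
use File::Temp qw(tempdir);

my $dir = tempdir(CLEANUP => 1);

# Stand-in maillog; the MDLOG,<queue-id>,<event>,... format is illustrative.
my $log = "$dir/maillog";
open my $fh, '>', $log or die "write sample log: $!";
print $fh "sendmail[1]: MDLOG,q1,mail_in,...\n";
print $fh "sendmail[2]: MDLOG,q2,spam,...\n";
print $fh "sendmail[3]: MDLOG,q3,mail_in,...\n";
close $fh;

# Tie the summary hash to an on-disk Berkeley DB file.
tie my %summary, 'DB_File', "$dir/summary.db",
    O_RDWR | O_CREAT, 0644, $DB_HASH or die "tie summary DB: $!";

my $bw = File::ReadBackwards->new($log) or die "open maillog: $!";

# Newest entries first; in the real script you would stop once you
# reach lines that were already summarized on a previous run.
while (defined(my $line = $bw->readline)) {
    my ($event) = $line =~ /MDLOG,[^,]+,([^,]+)/ or next;
    $summary{$event}++;
}

print "$_=$summary{$_}\n" for sort keys %summary;
# prints:
# mail_in=2
# spam=1
```

The per-line DB write is the part that makes me nervous at large log sizes.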

I made the statement about using an RDBMS because I've never had corruption
or core dumps or a server brought to its knees while parsing data with
MySQL.

Regards,
KAM


