
sybperl-l Archive


From: "Cox, Mark" <Mark dot Cox at FMR dot COM>
Subject: RE: out of memory -- CTlib
Date: Feb 8 2001 10:01PM

Thanks for the help and suggestions.

--- The scalar(localtime) change made a surprising difference in processing
time.

--- I ended up increasing the swap size on the Unix box, and this solved the
memory problem for the moment.

--- I am looking at writing it out to a file instead of keeping it all in
memory, but I need to see if this is any better than just reading it directly
from the database for each record. The main advantage of keeping all of the
info in memory is the enormous decrease in processing time: typically 2 hrs
becomes 10 min.  I will let you know how it goes.
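A middle ground between all-in-memory and a plain flat file is a disk-backed hash via Perl's tie mechanism. The sketch below uses the core SDBM_File module (DB_File or BerkeleyDB would handle larger records); the file name and keys are made up for illustration and are not from the thread:

```perl
use strict;
use Fcntl;
use SDBM_File;

# Tie a hash to an on-disk SDBM database: lookups stay hash-style,
# but the data lives on disk instead of in process memory.
my %lookup;
tie(%lookup, 'SDBM_File', '/tmp/lookup_cache', O_RDWR|O_CREAT, 0644)
    or die "Cannot tie SDBM file: $!";

# Populate as you would an ordinary hash (e.g. from each fetched row).
$lookup{'key1'} = 'value1';
$lookup{'key2'} = 'value2';

# Keyed lookup later, without holding 100,000 records in RAM.
print "key1 => $lookup{'key1'}\n";

untie %lookup;
```

Note that SDBM has a fairly small per-record size limit, so wide rows may need DB_File instead.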

Thanks again for the help


-----Original Message-----
From: Michael Peppler []
Sent: Thursday, February 08, 2001 11:15 AM
To: SybPerl Discussion List
Subject: Re: out of memory -- CTlib

Cox, Mark writes:
 > Any suggestions or help would be welcome.
 > I am using ct_lib to select large look-up tables from the database for
 > processing.  I tend to assign all of the info in the database into a hash
 > keyed on a specific value in the database and then read the file line by
 > line using the key as a quick lookup. What I am running into, however, is
 > that if I try to read in more than 100,000 records or so I get an 'Out of
 > Memory!' error.  Is there a more efficient way to read in a large number
 > of records into a hash table?  Any help or suggestions would be most
 > appreciated.

100,000 records in a hash table is quite a lot. Have you checked with
ps or top to see how much memory you are using? Do you have
limit/ulimit set?
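On Linux you can also read the process's own memory footprint from /proc, which gives the same number you would eyeball in ps or top. A small, Linux-specific sketch (the function name is made up for illustration):

```perl
use strict;

# Return this process's virtual memory size in kB, or undef if
# /proc is unavailable (non-Linux systems).
sub vm_size_kb {
    open(my $fh, '<', "/proc/$$/status") or return undef;
    while (<$fh>) {
        return $1 if /^VmSize:\s+(\d+)\s+kB/;
    }
    return undef;
}

my $kb = vm_size_kb();
print defined $kb ? "Using $kb kB\n" : "No /proc on this system\n";
```

Printing this every 10,000 rows would show whether the hash's growth is what drives the process into the 'Out of Memory!' error.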

I don't see any obvious problems with your code.
 > 			if (!($y % 10000) && ($y !=0)) {
 > 				print "$y Records processed at " , `date`;
 > 			}

You can use scalar(localtime) instead of `date`, which avoids a
fork()/exec() and should speed things up.
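The quoted loop with that substitution applied looks like this; note that localtime's string has no trailing newline, so one is added explicitly (the `date` version got its newline from the command's output):

```perl
use strict;

# `date` forks a shell and execs /bin/date on every call;
# scalar(localtime) formats the same timestamp in-process.
my $y = 20000;
if (!($y % 10000) && ($y != 0)) {
    print "$y Records processed at ", scalar(localtime), "\n";
}
```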

Michael Peppler - Data Migrations Inc.
International Sybase User Group
Sybase on Linux mailing list