sybperl-l Archive


From: "Cox, Mark" <Mark dot Cox at FMR dot COM>
Subject: out of memory -- CTlib
Date: Feb 8 2001 3:46PM

Any suggestions or help would be welcome.

I am using CTlib to select large lookup tables from the database for feed
processing.  I load all of the rows from the database into a hash keyed on a
specific column value, and then read the feed file line by line, using that
key as a quick lookup.  What I am running into, however, is that if I try to
read in more than 100,000 records or so I get an 'Out of Memory!' error.  Is
there a more efficient way to read a large number of records into a hash
table?  Any help or suggestions would be most welcome.

Thanks 
Mark

sub get_records {
	my ($dbh) = @_;
	my ($restype, @record);
	my $y = 0;

	# %record_list and the OUT filehandle are globals set up elsewhere.
	my $sql = "SELECT  a bunch of rows " .
	          "FROM  table ";

	$dbh->ct_execute($sql);
	while ($dbh->ct_results($restype) == CS_SUCCEED) {
		next unless $dbh->ct_fetchable($restype);
		while (@record = $dbh->ct_fetch) {
			if (!$record_list{$record[3]}) {
				# $record[3] is the key value
				$record_list{$record[3]} = [@record];
			} else {
				# report duplicate keys to the output file
				print OUT "=\"$record[3]\",=\"$record[0]\",=\"Duplicate\"\n";
			}
			$y++;
			if (!($y % 10000)) {
				print "$y Records processed at ", `date`;
			}
		}
	}
	$dbh->ct_cancel(CS_CANCEL_ALL);
	print "read $y total records\n";
}
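
For context, the lookup pass described above (reading the feed file line by
line and probing the hash) would look something like the sketch below.  This
is only an illustration, not code from the original post: the feed filename,
the tab-delimited layout, and the key being the first field are all assumed.

# Minimal sketch of the line-by-line lookup described in the post.
# Assumes %record_list was filled by get_records(), the feed file is
# tab-delimited, and the key happens to be the first field -- the real
# layout and filename are not shown in the post.
open(my $feed, "<", "feed.dat") or die "can't open feed.dat: $!";
while (my $line = <$feed>) {
	chomp $line;
	my ($key) = split /\t/, $line;
	if (my $row = $record_list{$key}) {
		# @$row holds the columns fetched for this key;
		# process the feed line against the cached row here.
	} else {
		# key not present in the lookup table
	}
}
close($feed);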