Michael Peppler
Sybase Consulting

sybperl-l Archive


From: "Colm O' Flaherty" <colm_o_flaherty at hotmail dot com>
Subject: Re: avoiding the cross-product
Date: Feb 22 2000 9:13AM

If the second database is structured similarly to the Sybase database, you can 
use a tool such as Embarcadero's DBArtisan or Sybase's PowerBuilder to pipe 
the data from one to the other.  If the table structures are dissimilar, you 
can always bcp the data out of the Sybase database into files on the file 
system, and then manipulate those files into something you can import into 
the second database.
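For the dissimilar case, a character-mode bcp export (bcp ... out ... -c) is just delimited text, so a small script can do the reshaping. A minimal sketch in Python; the filenames, delimiter, and column layout here are made up for illustration:

```python
import csv

# A character-mode bcp export is tab-delimited with one row per line.
# Write a tiny sample "export", then reshape it into CSV for the second
# database's loader.  All filenames and data are hypothetical.
with open("employee.bcp", "w") as f:
    f.write("1\tGrady\tUrbana\n2\tPeppler\tSF\n")

with open("employee.bcp", newline="") as src, \
     open("employee.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for line in src:
        writer.writerow(line.rstrip("\n").split("\t"))
```

The same loop is the natural place to merge, split, or reorder columns before the import.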

Something you need to watch out for is the level of normalisation of the 
second database; this accounts for the redundancy you were talking about: if 
you do a select from a single table and get repeated data in the results, 
the table is not fully normalised.  (There are several levels of 
normalisation, the normal forms invented by the gods of relational database 
theory, Boyce, Codd and friends.)

If the level of normalisation of the two databases is different, you will 
have to write select statements in the first database to get the data to the 
required granularity for the second database (aggregating or joining as 
required).
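For the n-rows-times-m-rows problem in the original question, querying each multiply-occurring table separately avoids the cross-product entirely. A minimal sketch using SQLite in place of Sybase, with made-up employee/appointment/attribute tables standing in for the real schema:

```python
import sqlite3

# One singly-occurring table (employee) and two multiply-occurring
# ones (appointment, attribute) -- all hypothetical.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE employee    (emp_id INTEGER, name TEXT);
    CREATE TABLE appointment (emp_id INTEGER, title TEXT);
    CREATE TABLE attribute   (emp_id INTEGER, attr TEXT);
    INSERT INTO employee    VALUES (1, 'Grady');
    INSERT INTO appointment VALUES (1, 'Programmer'), (1, 'Lecturer');
    INSERT INTO attribute   VALUES (1, 'phone'), (1, 'fax'), (1, 'room');
""")

# One massive join: n * m = 2 * 3 = 6 rows for a single person.
cross = db.execute("""
    SELECT e.name, ap.title, at.attr
    FROM employee e
    JOIN appointment ap ON ap.emp_id = e.emp_id
    JOIN attribute   at ON at.emp_id = e.emp_id
""").fetchall()

# Separate queries per multiply-occurring table: n + m = 5 rows total.
titles = db.execute("SELECT title FROM appointment WHERE emp_id = 1").fetchall()
attrs  = db.execute("SELECT attr  FROM attribute  WHERE emp_id = 1").fetchall()
```

Each person then costs one small query per multiply-occurring table (n + m rows) instead of one join producing n * m rows.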

Hope this helps.


>To: SybPerl Discussion List 
>Subject: avoiding the cross-product
>Date: Tue, 22 Feb 2000 01:02:31 -0600 (CST)
>This is more of a basic sql question (I'm pretty new to sql). I need
>to do a 'dump' of a Sybase database for initializing an LDAP database.
>A person record in the Sybase database spans a number of tables, where
>some of the info is singly-occurring, and other is multiply occurring
>(e.g. the appointment table can have n rows for the same employee, and
>the 'general attribute' table can have m rows for that person.) If I do
>a massive join to get all that info, I get n*m rows for that person in
>the result set.
>Now since this dump might only happen once, a certain amount of
>inefficiency can be tolerated. But not n*m rows per person. So what is
>the right way to do this? Do I just get the singly occurring stuff in
>one query, and then get each of the multiply-occurring stuff in
>separate queries each? Or is there a different technique that accomplishes
>Michael A. Grady                  
>Senior Research Programmer        
>University of Illinois                      (217) 244-1253  phone
>Computing & Communications Services Office  (217) 244-7089  fax
>Rm. 1514 DCL, 1304 W. Springfield Ave       Urbana, IL 61801
