Sybperl 2.0: Using the Sybase::CTlib module

Note: This article was first published in the Spring 1996 (Volume 5, Number 1) issue of the ISUG Technical Journal (formerly the Sybase Server)

Sybperl is the generic name for the Open Client extensions to Perl. Perl stands for Practical Extraction and Report Language. It is a shell-like scripting language that borrows its syntax from a large number of sources, including C, the Bourne shell, the C shell, sed and awk. Because of its powerful syntax and unified features, perl has become the language of choice for Unix system administrators, and it has also proved very popular for writing CGI scripts for WWW applications. Sybperl extends perl by adding a fairly large subset of the Open Client API, making it possible to write very powerful scripts in a very short time. Sybperl was first released in 1991, and has gained fairly wide acceptance among Sybase DBAs and programmers.

Both perl and sybperl are available in source form, and are distributed under the GNU General Public License, or under the Perl Artistic License.

In early 1995, version 5.0 of perl was released, with many new features and enhancements. In particular, Perl 5.0 adds a new 'Object Oriented' paradigm to perl programming, with the ability to create object hierarchies via perl 'modules'. Sybperl version 2.0, released last November, takes advantage of these new features, and extends the range of Open Client API calls that are supported to include a subset of the Client Library (aka CTlib) API (Sybase's new web.sql product uses a version of this API.) This article will introduce Perl's interpretation of the CTlib API. I will assume that the reader has some familiarity both with Perl and with the Open Client products. Please see the end of the article for references on how to obtain perl and sybperl and for Perl documentation.

The prototypical Sybase::CTlib program looks like this:


# Load the Sybase::CTlib module:
use Sybase::CTlib;

# Allocate a new Database 'handle':
$dbh = new Sybase::CTlib 'user', 'password', 'SERVER';

# Send the query to the server:
$dbh->ct_execute("select uid, name from sysusers");

# Retrieve the result sets
while($dbh->ct_results($restype) == CS_SUCCEED) {

    # Skip non-fetchable results:
    next unless $dbh->ct_fetchable($restype);

    # Retrieve actual data rows:
    while(($uid, $name) = $dbh->ct_fetch) {
        print "$uid - $name\n";
    }
}

Running this script produces output like:

0 public
1 dbo
3 mpeppler
You will notice that with Perl 5.0, a special 'sybperl' binary is no longer necessary on most Unix platforms: Perl is capable of dynamically loading the libraries that are necessary at run time. Compared to a 'C' program doing the same thing, this Perl script is an order of magnitude simpler, while providing the same level of control and flexibility (including error handler callbacks). The object oriented features of Perl provide a lot of additional help, including automatic 'destruction' of 'objects' when they go out of scope. Thus it is usually not necessary to explicitly close a connection to the database.

Some Explanations:

The new call allocates and initializes a new Sybase::CTlib object. This object (referred to as the Database handle, or DBhandle) provides the central access point to the CTlib calls (or methods in OO parlance) that are used to interact with the database. The $dbh variable returned by new is a fully configured connection (all the initialization procedures have been done.)

The ct_execute() method takes a SQL statement and sends it to the server using the $dbh connection. The SQL statement can be a simple SELECT, or any combination of statements (ct_execute() is a combination of ct_command() and ct_send().) The ct_results() call retrieves each of the result sets that are generated by this query, and any data rows are fetched with ct_fetch(). Notice that ct_fetch() directly returns an array of values, one for each column in the result row - variable binding and column description is done automatically.

ct_fetch() can also return an associative array keyed on the column name. This feature is enabled by passing a TRUE flag to ct_fetch:

	$dbh->ct_execute("select uid, name from...");
	%dat = $dbh->ct_fetch(CS_TRUE);
	if($dat{'uid'} == 10) {
	    ...
	}

This technique makes the code less sensitive to changes in the SQL submitted to the server (you don't have to worry if the uid column is column 1 or column 2 in the result set), and makes it easier to read, too.
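This robustness can be illustrated with plain Perl hashes, no server connection required. In this standalone sketch the column names and values are simply the ones used in the examples above, and the two hashes stand in for the same row fetched from two differently ordered result sets:

```perl
# Two versions of the 'same' result row, with the columns in a
# different order (as ct_fetch(CS_TRUE) might return them after
# a change to the SQL):
%row_v1 = (uid => 10, name => 'mpeppler');
%row_v2 = (name => 'mpeppler', uid => 10);

# Keyed access gives the same answer regardless of column order:
print "uid: $row_v1{uid}\n";   # prints: uid: 10
print "uid: $row_v2{uid}\n";   # prints: uid: 10
```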

One step queries:

An even simpler access method exists with the 'one step' ct_sql() method:
	@rows = $dbh->ct_sql("select uid, name from...");
The @rows variable holds an array of 'references' to rows returned by the ct_fetch() call. Perl 'references' are similar to 'C' pointers: they offer an indirect access to a particular data item. In this case, each element of the @rows array 'points' to an array returned by ct_fetch(). To access each returned row, you could write:
	foreach $row_ref (@rows) {
		($uid, $name) = ($$row_ref[0], $$row_ref[1]);
	}
Note the double $ sign when de-referencing the $row_ref variable (more on references later).
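For readers new to references, here is a minimal standalone sketch of the dereferencing syntax, using a hand-built row reference in place of one returned by ct_sql() (the values are just the ones from the sysusers examples):

```perl
# A reference to an anonymous array, like each element of @rows:
$row_ref = [ 10, 'mpeppler' ];

# The double-$ syntax used above:
($uid, $name) = ($$row_ref[0], $$row_ref[1]);
print "$uid - $name\n";        # prints: 10 - mpeppler

# The equivalent 'arrow' syntax reaches the same elements:
print $row_ref->[1], "\n";     # prints: mpeppler
```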

This one step call is very useful for writing simple, clear code, but has one major drawback: the entire query result set is loaded into memory, which makes it unusable for processing large result sets.

Dates and Money:

Something that has often been a problem for me has been the handling of dates and other special data types (such as Money and Numeric). We use these data types in our databases for a reason, and it is annoying (to say the least!) to see their special properties disappear due to a conversion to CHAR or to FLOAT. So the CTlib module has the capability to preserve the full precision and attributes of these special data types. This is done by creating new classes of objects with specific behaviours. Let's say we have retrieved a date from Sybase using ct_fetch():
($date) = $dbh->ct_fetch;
When the $date value is printed the normal char string will be displayed. But you can also say:
($year, $month, $mon_day, $year_day, $week_day,
$hour, $min, $sec, $msec, $zone) = $date->crack; # call cs_dt_crack()
($days, $msecs) = $date->diff($date2);
or even
$time_t = $date->mktime; # Convert the date to a Unix time_t value.
Similar functionality is available with MONEY and DECIMAL/NUMERIC types, including all the arithmetic operators:
$money_result = $money1 / $money2;
$money_result will be the result of calling the cs_calc() with the appropriate parameters, and no loss of precision normally inherent in converting to floating point will be incurred.
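As a rough analogy for what crack() does, Perl's built-in gmtime() performs a similar decomposition of a Unix time_t value into its individual components. This sketch uses only core Perl, not the CTlib date type, so it can be run without a server:

```perl
# gmtime() 'cracks' a Unix time_t into its components, much like
# $date->crack (which calls cs_dt_crack()) does for a Sybase date:
$time_t = 0;   # the Unix epoch
($sec, $min, $hour, $mday, $mon, $year, $wday, $yday) = gmtime($time_t);

# Note the C conventions: $year is years since 1900, $mon is 0-based.
printf "%04d-%02d-%02d %02d:%02d:%02d\n",
       $year + 1900, $mon + 1, $mday, $hour, $min, $sec;
# prints: 1970-01-01 00:00:00
```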

Of course, nothing is free, and this native handling of dates and fixed precision types has a certain performance cost, which in extreme cases can add up to 75% to the processing time. This feature is therefore optional and can be selectively turned on for one or more of the data types, on a per-connection basis.

Advanced features:

DB handle Attributes:

Attributes, which control the behaviour of the CTlib module, can be set or examined for each connection. They are stored in a special hash table that can be accessed via the DB handle:
$value = $dbh->{AttributeName};
Currently the attributes control the behaviour of DateTime, Money and Numeric data types, and the maximum number of rows that ct_sql() will let you retrieve (to avoid memory exhaustion problems).

Stored procedures, parameters and cursors:

Sybperl's CTlib support for stored procedures, RPCs and cursors is very similar to its C counterpart. To call a parametrized stored procedure, you need to call ct_command(CS_RPC_CMD, ...), then set up the parameters with ct_param(). The command is submitted to the server via ct_send(), and you use the traditional ct_results()/ct_fetch() loop to retrieve the results.
The only difference lies in how parameters are defined via ct_param(). In C, the CS_DATAFMT structure is used to define the parameter, with an additional 'indicator' value specifying whether this parameter is NULL. In Perl, we pass a hash value to the ct_param() method:
    %param = (name => '@acc',
	      datatype => CS_CHAR_TYPE,
	      status => CS_RETURN,		# This is an OUTPUT param
	      value => 'CIS 98941',
	      indicator => CS_UNUSED);
    $dbh->ct_param(\%param);	# Pass a reference to the hash...
Executing an RPC becomes (without error checking):

$dbh->ct_command(CS_RPC_CMD, "t_proc", CS_NULLTERM, CS_NO_RECOMPILE);

%param = (name => '@acc',
	  datatype => CS_CHAR_TYPE,
	  status => CS_INPUTVALUE,
	  value => 'CIS 98941',
	  indicator => CS_UNUSED);
$dbh->ct_param(\%param);

# Alternate technique: pass an anonymous hash...
$dbh->ct_param({name => '@date',
	        datatype => CS_DATETIME_TYPE,
	        status => CS_RETURN,
	        value => '950529',
	        indicator => CS_UNUSED});

$dbh->ct_send;

while($dbh->ct_results($restype) == CS_SUCCEED) {
    next unless $dbh->ct_fetchable($restype);

    while(%dat = $dbh->ct_fetch(CS_TRUE)) {
	foreach (keys(%dat)) {
	    print "$_: $dat{$_}\n";
	}
    }
}
The key to the hash table returned by ct_fetch() is the name of the output parameters to the stored procedure ('@date' in this case).

If the above code seems a bit complicated, you can often get away with writing something like this:

	@rows = $dbh->ct_sql(<<'SQL_EOF');
	   declare @date datetime
	   select @date = '950529'
	   exec t_proc 'CIS 98941', @date out
SQL_EOF
This uses a technique that shell programmers know well: the 'here' document - everything between the lines containing the SQL_EOF tags is converted to a string which is used as a parameter to the ct_sql() call.
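The quoting of the tag matters here: because SQL_EOF is enclosed in single quotes, Perl performs no variable interpolation on the document, so '@date' reaches the server literally instead of being expanded as a Perl array. A standalone sketch (no server needed):

```perl
# A single-quoted here document: the body is taken verbatim,
# so the SQL variable '@date' is NOT interpolated away.
$sql = <<'SQL_EOF';
declare @date datetime
select @date = '950529'
SQL_EOF

print $sql;   # '@date' appears literally in the output
```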

The @rows variable will contain references to the rows returned by the stored procedure.

As you can see, very few lines of code can produce amazing results!

Cursors are programmatically complex, and their use in Perl remains a bit arcane. The steps that are required to setup a cursor operation are the same as in C:

    $dbh->ct_cursor(CS_CURSOR_DECLARE, 'cursor_eg',
	            'select * from master.dbo.sysprocesses',
	            CS_UNUSED);
    $dbh->ct_cursor(CS_CURSOR_ROWS, undef, undef, 5);
    $dbh->ct_cursor(CS_CURSOR_OPEN, undef, undef, CS_UNUSED);
    $dbh->ct_send;
    while($dbh->ct_results($restype) == CS_SUCCEED) {
        ...
    }

There are two support functions that allow you to do the same things with cursors in Perl as in C.

Using Perl references:

I mentioned Perl references briefly earlier in this article: they behave in a manner similar to pointers in C in that they let you reference a data item through an indirect mechanism. References are often used in Perl to build complex objects, quite often involving nested hashes (associative arrays).
We can illustrate one use of references with the following Perl subroutine, which loads an entire table into memory, while keeping it easily accessible via the primary key:
sub load_table {
    my($dbh, $table, $key) = @_;
    my(%out, %dat, $restype);

    $dbh->ct_execute("select * from $table");
    while($dbh->ct_results($restype) == CS_SUCCEED) {
        next unless $dbh->ct_fetchable($restype);
        while(%dat = $dbh->ct_fetch(CS_TRUE)) {
            $out{$dat{$key}} = {%dat};
        }
    }
    return %out;
}
We could use this subroutine to load the sysusers table into memory:
%sysusers = &load_table($dbh, 'sysusers', 'suid');
Now, if we know the suid of a user, we can find their name immediately:
$name = $sysusers{3}->{'name'};
print "$name\n"; # Prints 'mpeppler'
We can 'walk' over all the elements of the table using this sort of construct:
    foreach $suid (keys(%sysusers)) {
        foreach $col (keys(%{$sysusers{$suid}})) {
	    print "$suid - $col: $sysusers{$suid}->{$col}\n";
        }
    }

which produces output like:

1 - name: dbo
1 - environ: 
1 - uid: 1
1 - gid: 0
1 - suid: 1
3 - name: mpeppler
3 - environ: 
3 - uid: 3
3 - gid: 16384
3 - suid: 3
This makes it really easy to load small look-up tables into memory and still be able to access the columns via an index. This technique can of course be expanded and/or applied in other situations where you wish to keep or accumulate information in memory through structured variables.
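The same hash-of-hashes structure can be built without a database connection at all. This standalone sketch mimics what load_table() does, substituting hand-made rows for ct_fetch() results (the suid/name values are taken from the sample output above):

```perl
# Build a hash-of-hashes keyed on the primary key, as load_table()
# does, but from literal rows instead of ct_fetch() results:
%sysusers = ();
foreach $row ([1, 'dbo'], [3, 'mpeppler']) {
    ($suid, $name) = @$row;
    $sysusers{$suid} = { suid => $suid, name => $name };
}

# Immediate look-up by primary key, then by column name:
print $sysusers{3}->{name}, "\n";   # prints: mpeppler
```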

Why should I use CTlib...

...when the DBlib module seems perfectly suitable?

Although I am not advocating converting all existing sybperl code to use the new CTlib module, I think that using it should be considered for new projects. Why? For one thing, Sybase has stated that although DBlibrary will be supported in the future, it will no longer be enhanced to support new functionality. Secondly, the Client Library programming model is very clean, and Sybperl's implementation of the API makes it really easy to use.

The Client Library syntax has been unified and simplified compared to DBlibrary, and the number of API calls has been greatly reduced: the Sybase::DBlib module implements 93 calls, the Sybase::CTlib module fewer than 20, with similar functionality. In addition, native handling of Dates and other special data types can make things a lot easier when writing scripts.

Missing features in the current version of the Sybase::CTlib module include access to the bulk copy library (which is available in the Sybase::DBlib version), asynchronous programming and dynamic SQL. The bulk copy library will certainly be added in the near future.

Sybperl in general, and the Sybase::CTlib module in particular, have to a certain extent been blessed by Sybase, and are becoming supported products - as previously noted, web.sql uses a version of the Sybase::CTlib module as its underlying engine. Sybperl may not be for everybody, as it requires understanding a number of sometimes complex programming issues, but it beats using shell scripts or writing custom C code hands down.

Where do I get sybperl?

Sybperl is available from CPAN, the Comprehensive Perl Archive Network, in directory "CPAN"/authors/Michael_Peppler. CPAN is mirrored extensively around the world; the easiest starting point is the main Perl web site, which carries a CPAN link and, as the center of all things perl, has an enormous amount of information.

Perl is available from the same source, in directory "CPAN"/src/5.0. The best Perl documentation is found in the 'Camel' and 'Llama' books (Programming Perl, Wall, Schwartz & Christiansen, O'Reilly, and Learning Perl, Schwartz, O'Reilly). There is also a sybperl home page.

This article was first published in the Sybase Server in the USA, 1996. Copyright is owned by Sybase, Inc.
Michael Peppler
Last modified: Mon Dec 6 06:06:31 EST 2004