Make Your Clunky Code Sizzle

Take advantage of the Synergy Language Profiler
By Tod Phillips, Synergy/DE PSG Consultant

I was recently called to a customer’s site to investigate poor performance in one of their routines. A function that is regularly called remotely via an xfServerPlus/xfNetLink configuration was misbehaving: its execution consumed a lot of CPU time, and even small sets of records regularly took a long time to be returned.

Donning my Sherlock Holmes hat, a trench coat, and a magnifying glass (all digital, of course), I got to sleuthing. After some preliminary examination of the source code, I realized that the “usual suspects” that cause sluggishness simply weren’t present.

With a couple of quick commands at the system prompt and a speedy recompile of the routine in question, I re-ran our test program. Another couple of command-line keystrokes and a neatly summarized output appeared on the screen. “Aha!” I exclaimed, pointing at the monitor. “There’s your problem right there!”

Stunned silence descended on the cubicle. Even the hum from the solitary PC sitting nearby seemed to dip in volume. At last, someone found their voice and said, “That’s amazing…how did you do that?”

“Elementary,” I replied.

The skeptical silence that followed was considerably shorter—so short, in fact, that I began to wonder if things hadn’t been so quiet simply because everyone was busy reading the output. “No really,” he said. “What was that?”

With that, my dreams of being immortalized as a Super Systems Shamus were crushed. The game was up; they’d seen too much and now I’d have to come clean. “That was the Synergy Language Profiler, and it’s a beautiful thing.”

What is the Profiler?

“The Profiler is one of the most powerful tools available to Synergy developers, but few people know about it, so don’t feel too bad,” I said. “Really, you can think of it as an automated ‘debug mode’ that steps through your code, counts the number of XCALLs, analyzes the amount of CPU time each takes, and logs the results.”

In this case, the Profiler had listed every XCALL performed by the routine in question, and we quickly identified several commonly used internal subroutines that did date calculations. All told, in the course of reading and filtering perhaps 500 records, our “misbehaving” routine had called for date calculations a dizzying 77,288 times. The Profiler also showed us the cost of so many XCALLs: 28.3% of the total CPU time.

DBLOPT 42: The answer to the Great Question?

If you’ve ever questioned what’s going on under the hood each time you fire a routine—whether it’s because you’re disappointed in its performance or have assumed that slow execution times are due to complexities in its operations—then the Profiler is for you. Actually, forget that. The Profiler is for everyone, and it should be a staple of your development diet.

And it’s easy to use. I wasn’t lying when I said it was just a few keystrokes at the command line (though I may have taken a few liberties in describing the conversation). In a nutshell, I examined the DBLOPT environment variable to see what (if any) settings it already had. Then I simply added option 42 to the list, which enabled profiling for both the routine in question and every XCALL it performed.
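To make that concrete, on a Windows development machine the exchange might look something like the following; the existing value of 35 is purely illustrative, and the syntax for displaying and setting environment variables differs on Unix and OpenVMS:

> set DBLOPT
DBLOPT=35
> set DBLOPT=35,42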

At that point, all that was required was a recompile and relink of the code. At the moment the questionable routine was fired by our test program, the Profiler took over and began logging everything that happened. After the test program exited, I issued one more command at the command prompt:

> DBR DBLDIR:PROFILE

This caused the profile.dat file, newly created by the Profiler in the current directory, to be transformed into a more user-friendly summary called profile.lst. (I should also point out that once I had my baseline profile, I removed option 42 from DBLOPT, as I didn’t want the overhead of the Profiler running unless I needed it.)
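Laid end to end, the whole session from my side of the keyboard looked roughly like this. The names myroutine and myprog are placeholders for the customer’s actual source and program, the compile and link steps will normally be handled by your own build procedure, and the final command simply restores the illustrative DBLOPT value from before:

> dbl myroutine
> dblink myprog
> dbr myprog
> dbr DBLDIR:PROFILE
> set DBLOPT=35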

The profile listing: A summary of your route?

If you picture your program as a car that’s taking your user from Point A to Point B, then the Profiler is the navigation system. The listing it provides is the route summary, and it shows every turn the car took along the way—right down to showing that you went through the same roundabout 12 gajillion times (a gajillion being a number somewhere between “excessive” and “mind-numbingly huge”).

Just so: it was the examination of profile.lst that revealed the prolific use of date calculations (as well as the names of several other suspect routines). The profile shows the total number of CPU ticks for the main subroutine and then gives a line-by-line analysis of each routine that was XCALLed: the subroutine name, the number of times it was called (iterations), the total CPU time it accumulated, and the CPU time per iteration.

The profile even sorts the routines in descending order by total CPU time, so it’s easy to see where most of a routine’s overhead lies; the biggest culprits are right at the top of the list. We could quickly spot problems in the misbehaving routine, and we fixed many of them in a matter of hours. By saving a copy of the first profile.lst as baseline_profile.lst, we were able to run comparative profiles and, within a day, see a 55% drop in CPU ticks. A stopwatch showed the code executing two to three times faster…and we hadn’t even touched the other suspicious subroutines our code was calling!
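If you want to run the same comparison yourself, the bookkeeping is nothing more than copying the listing before you start making changes (Windows syntax shown below; the file names simply mirror the ones we used):

> copy profile.lst baseline_profile.lst

Each subsequent run of DBR DBLDIR:PROFILE then writes a fresh profile.lst that you can set side by side with the baseline.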
