05 September, 2013

DTraceToolkit 0.XX Mistakes

You learn more from failure than you do success. In this post, I'd like to list my mistakes and failures from versions 0.01 to 0.99 of the DTraceToolkit, as lessons to learn from. There is much detail here that may only interest a small number of people. And if you are a recruiter, I can assure you that this won't interest you at all, so please stop reading. In fact, I'm another Brendan Gregg – the one who makes many mistakes – since there are several of us, after all. :-)

As a summary of the lessons learned, skip to the end for the "Learning From Mistakes" section.

Background

By 2005 I was sharing a small collection of DTrace scripts – tools – on my homepage. These were popular with a wide audience, including those who didn't have time to write DTrace scripts from scratch, and those who weren't programmers anyway. For these casual DTrace users I created a toolkit, to:

  • Give people useful tools to run immediately
  • Present good examples of DTrace to learn from

I was doing performance consulting and training, and had a keen sense of what problems needed solving. So creating useful tools was easy: I already had a laundry list of needs. As for good examples: I made my own coding style and stuck to it. I also created documentation for every script: a man page, and a file showing example usage. And I tested every script carefully.

The toolkit has been successful, helping people solve issues and learn DTrace. The scripts have also been included, as tools, in multiple operating systems by default, including Mac OS X and Oracle Solaris.

Mistake 1. Missing Scripts

The observability coverage was a little uneven, as it was based on my performance experience at the time. Some areas I didn't write enough scripts for, and some areas I missed by mistake: for example, socket I/O from the syscall interface. I should have drawn a functional diagram of the system and kernel internals, to look for areas that were lacking scripts and observability.

I did cover the missing areas when I wrote numerous new scripts for the DTrace book, which are shared online as their own collection. But I haven't figured out how to include them in the DTraceToolkit, as they were created as example scripts (that likely need tweaking to run on different OSes) and not robust tools. So, now I have two collections of scripts, and over 400 scripts total.

Mistake 2. Too Many Scripts

I think I have too many scripts for the target audience: casual users. I'm reminded of the function/complexity ratio described in The Mythical Man Month, where the addition of some function came at the cost of much more complexity, making computers harder to learn and use. Casual users may only use a handful of the scripts, and having extra scripts around adds complexity to browse. This also becomes a maintenance burden, and testing can fall behind (it already has).

I did use a directory hierarchy to organize the scripts, with "top scripts" in the top level directory, which I think helps. I've also been thinking of creating a separate collection for performance engineers which contains every script (400+), and reducing the DTraceToolkit to 100 or fewer for the casual users.

Mistake 3. Inventing My Own Style

I should not have invented my own style to begin with. This included double asterisks for block comments, and no spaces after commas. From the original iosnoop:

#!/usr/sbin/dtrace -s
/*
** iosnoop.d - A program to print I/O events as they happen, with useful
** details such as UID, PID, inode, command, etc. 
** Written in DTrace (Solaris 10 build 51).
**
** 29-Mar-2004, ver 0.60.  (check for newer versions)
[...]

At the time I thought it was neat. I now think it looks bad. After about 50 scripts, someone from Sun suggested I follow "cstyle", as that was the standard for Sun's C code. This seemed to be a better idea than my own invented style, but I had already written 50 scripts! I had to rewrite all the scripts to be cstyled - a nuisance. I should have asked others in the DTrace community before creating my own style, as many would have made the same suggestion: use cstyle.
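
For comparison, here is roughly what the same header looks like in cstyle, with a single leading asterisk on each continuation line (a sketch, not the actual rewritten header):

#!/usr/sbin/dtrace -s
/*
 * iosnoop.d - print I/O events as they happen, with useful details such as
 *             UID, PID, inode, command, etc.
 *
 * 29-Mar-2004, ver 0.60.  (check for newer versions)
 */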

Mistake 4. Complex scripts

Some of the scripts are too long and too complicated. These make poor examples to learn DTrace from, which was one of the goals of the toolkit. They also are a pain to maintain. In some cases it was necessary, since the point of the script was to resemble a typical Unix *stat tool, and therefore needed shell wrapping (getopts). Eg, iosnoop, iotop, and execsnoop. But in other cases it wasn't necessary, like with tcpsnoop.d.

One reason tcpsnoop.d is complex is the provider it uses (next mistake), but another reason was the aim. I wanted a tool that emitted the exact same output as snoop (Solaris tcpdump), but decorated with a PID column and other kernel context that can't be seen on the wire. This turned out to be very complex without a stable tcp provider to use, especially correctly matching the TCP handshake packets, RST packets from closed port SYNs, and other TCP behavior. I should have stopped and gone back to the problems this would solve: primarily, matching TCP sessions to their PIDs, and quantifying their workload. That could have been solved by tracing send and receive packets alone. Other objectives, like tracing closed port packets, should have been handled by separate scripts. Doing everything at once was ideal, but not practical at the time.

The DTrace book taught me discipline for creating short, simple, and useful scripts, that fit on half a textbook page. I think that's a better approach where possible, which may mean several small scripts for specific problems, instead of one long script. That will help with readability, maintenance, and testing.

Mistake 5. The fbt Provider

I suspected I was making this mistake at the time, and I was. The function boundary tracing (fbt) provider does dynamic tracing of kernel functions, arguments, return values, and data structures. While the observability is amazing, any script that is fbt-based is tied to a kernel version (the code it instruments), and may stop working after an upgrade or a patch. This happened with tcpsnoop.d and tcptop, complex scripts that instrumented many TCP internals. An excerpt:

/*
 * TCP Fetch "port closed" ports
 */
fbt:ip:tcp_xchg:entry
/self->reset/
{
#if defined(_BIG_ENDIAN)
 self->lport = (uint16_t)arg0;
 self->fport = (uint16_t)arg1;
#else
[...]

The function this traces, tcp_xchg(), was removed in a kernel update. This broke tcpsnoop.d. The sad part is that this code was only necessary for tracing closed port packets (RST), and most of the time people weren't using tcpsnoop.d for that. See the earlier mistake. Had this been separate scripts, the breakage would be isolated, and, easier to fix and test.

I stopped creating fbt-based TCP scripts after tcpsnoop and tcptop, waiting for a stable DTrace tcp provider to be developed (which I ended up developing). I think it was useful to have them there, even though they usually didn't work, because it has been important to show that DTrace can do kernel-level TCP tracing. See my other post on the value of broken tools.
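
To show the contrast: with a stable tcp provider available, the core of the original problem - summing TCP send and receive bytes by PID - is only a few lines. A minimal sketch, assuming your OS ships the tcp provider and fills in the cs_pid member:

#!/usr/sbin/dtrace -s
/* sum bytes sent/received over TCP by PID, using the stable tcp provider */
tcp:::send,
tcp:::receive
{
        @bytes[args[1]->cs_pid, probename] = sum(args[2]->ip_plength);
}

Run it for a while, then hit Ctrl-C to print the aggregation.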

Mistake 6. The syscall Provider

I knew this mistake was theoretically possible, but never expected it would actually happen. The DTrace syscall provider is technically an unstable interface, due to the way it instruments the kernel trap table implementation. The DTrace manual mentioned the couple of places where this mattered, as probe names differed from syscall names. So any script that traces syscalls could break after a kernel upgrade, if the syscall trap table interface changed. Well, I'd expect syscalls to be added, but I didn't expect many changes to the existing names.

Oracle Solaris 11 went ahead and changed the syscall trap table significantly. This was to slightly simplify the code (housekeeping), making it a little easier for the few kernel engineers who maintain it. A side effect is that it broke many DTrace one-liners and scripts that customers were using. Taking that into consideration, I think this change was a mistake. It's adding more complexity for customers to endure than the function it provides (see the Mythical Man Month). In one use case, customers on Oracle Solaris 11 must include the birthday of an engineer in their scripts (a magic number) in order to have similar functionality as before. This is just madness.
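
One small way to soften (not solve) this kind of breakage is to match syscall probe names loosely. A sketch; the wildcard catches variants such as open64() and openat(), but only if the trap table entry still begins with "open":

# count file opens by process name
dtrace -n 'syscall::open*:entry { @[execname] = count(); }'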

Whose mistake is this ultimately? Everyone's. The DTrace syscall provider implementation was unstable to begin with, and while not a big problem at the time, you could argue that it should have been implemented as a stable interface from the very beginning. Perhaps I should not have used it so much, although without a stable alternative, much of the value of the DTraceToolkit would be lost. Avoiding it would also have made DTrace adoption more difficult, as tutorials usually begin by tracing the well-understood system calls. But the biggest mistake may be Oracle making these changes without also fixing the syscall provider. They did fix DTraceToolkit scripts and shipped them with Oracle Solaris 11, but much of the existing documentation for DTrace has not been fixed, nor is it easy to do so. All documentation needs to note that Oracle Solaris 11 is a special case, where the trap table differences are so large you do need to learn them. Fail.

Mistake 7. Scripts That Build Scripts

I shouldn't have written complex scripts that generate DTrace scripts. I did this once, with errinfo, a Perl program that builds and executes DTrace as a co-process. After finishing that I realized it was a bad idea, and did no more. It makes the script a difficult example to learn from and difficult to maintain. What I did do for many scripts (iosnoop, opensnoop, etc), was to break the program into two halves: the top half is a shell script for processing options, and the bottom half is the D script. This kept it simple, and maintenance wasn't too difficult.
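
As a minimal sketch of that two-halves pattern (the option name and output are illustrative, not from any real toolkit script):

#!/bin/sh
# top half: shell option processing
opt_bytes=0
while getopts b name
do
        case $name in
        b)      opt_bytes=1 ;;
        *)      echo "USAGE: example [-b]" >&2; exit 1 ;;
        esac
done

# bottom half: the D script, with shell variables spliced in as inline constants
/usr/sbin/dtrace -n '
 #pragma D option quiet
 inline int OPT_bytes = '$opt_bytes';

 io:::start
 {
        printf("%-16s %s\n", execname,
            OPT_bytes ? lltostr(args[0]->b_bcount) : args[2]->fi_pathname);
 }
'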

Mistake 8. The Names iosnoop and iotop

I should have called these disksnoop and disktop, to make it clear that they are tracing physical I/O and not logical I/O (syscalls). It's confused a lot of people, and I'm sorry. I should have picked better names. iotop has since been ported to Linux (more than once), where the name has been kept, so I've indirectly confused those users as well.

Mistake 9. Not Testing iosnoop With High Enough Load

I didn't test iosnoop with a high enough disk IOPS rate. My development servers had a limited number of disks, where it ran fine. But on large scale servers, people reported "dynamic variable drops". The fix was to increase the dynvarsize tunable, which I have done in the next version (comes after the "switchrate" line):

 /* boost the following if you get "dynamic variable drops" */
 #pragma D option dynvarsize=16m

I could also change the key to the associative arrays to reduce overhead. It is currently the device and block number (this->dev, this->blk), but using the pre-translated message pointer (arg0) also works for many if not all OSes. The down side is that using arg0 is an unstable interface, which I'd rather avoid.
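
For reference, a minimal sketch of the approach (keying on device and block number, with the dynvarsize boost); the real iosnoop does much more:

#!/usr/sbin/dtrace -s
#pragma D option quiet
/* boost the following if you get "dynamic variable drops" */
#pragma D option dynvarsize=16m

io:::start
{
        /* remember the start time, keyed on device and block number */
        start_time[args[0]->b_edev, args[0]->b_blkno] = timestamp;
}

io:::done
/start_time[args[0]->b_edev, args[0]->b_blkno]/
{
        printf("%8d us  %7d bytes\n",
            (timestamp - start_time[args[0]->b_edev, args[0]->b_blkno]) / 1000,
            args[0]->b_bcount);
        start_time[args[0]->b_edev, args[0]->b_blkno] = 0;
}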

Mistake 10. Using io:genunix::start

I should not have used this probe at all, or, I should have added a comment to explain why I was. io:::start traces all disk I/O from the block device interface, and, NFS client I/O. Specifying "genunix" as the module name matched the disk I/O only, which is what I wanted, as NFS I/O was from a different module. But that's an unstable way to do it, and these scripts didn't work on other OSes that didn't have "genunix" as the module name. In later versions of my scripts, I removed the genunix, which means they also match NFS I/O.

I explained this in the DTrace book, and mentioned the stable fix: using io:::start with the predicate /args[1]->dev_name != "nfs"/.
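
For example, as a one-liner (a sketch of the stable approach):

# count disk I/O by device name, excluding NFS client I/O
dtrace -n 'io:::start /args[1]->dev_name != "nfs"/ { @[args[1]->dev_name] = count(); }'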

Mistake 11. Using Global Associative Arrays

These can become corrupted, and did for the cputimes script. This one in particular:

        /* determine the name for this thread */
        program[cpu] = pid == 0 ? idle[cpu] ? "IDLE" : "KERNEL" :
            OPT_all ? execname : "PROCESS";

That's saving a string to a global associative array, program[], which is keyed on the CPU id. Unlike aggregations or thread-local variables, these are not multi-CPU safe. cputimes would sometimes print strings that were garbled. Fortunately this wasn't subtle corruption, but was pretty obvious.
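
A minimal sketch of the safer alternatives - a thread-local variable plus an aggregation - rather than the actual cputimes fix:

#!/usr/sbin/dtrace -s
sched:::on-cpu
{
        /* thread-local variables are private to the thread, so not corruptible by other CPUs */
        self->label = pid == 0 ? "KERNEL" : "PROCESS";
        self->ts = timestamp;
}

sched:::off-cpu
/self->ts/
{
        /* aggregations are also multi-CPU safe */
        @oncpu[self->label] = sum(timestamp - self->ts);
        self->label = 0;
        self->ts = 0;
}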

Mistake 12. dapptrace

I should have warned about the overheads of dapptrace more clearly. With the -a option, it traces every user-level function, which for some applications will significantly slow the target. Without -a, it traces just the application text segment, which, depending on the application, can also incur high overhead. I warned about it in the example file, but I should have warned about it in the script and man page as well.

Maybe I shouldn't have even written this tool. With applications, I'm usually tracing a specific function or group of functions, which minimizes the overhead.
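
For example, tracing one function of interest with the pid provider (the function name and PID here are made up):

# count calls to a single user-level function in process 1234
dtrace -n 'pid$target::myfunc:entry { @ = count(); }' -p 1234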

Mistake 13. Missing Units On Output

I should have always put units on the output. For example, running iosnoop with completion times (-t, for the TIME column) and I/O delta times (start to done, -D, for the DELTA column):

# iosnoop -Dt
TIME           DELTA     UID  PID D    BLOCK   SIZE      COMM PATHNAME
633389165137   1903        0    1 W 749429752   4096   launchd ??/T/etilqs_sa...
633389165187   1461        0    1 W 385608448   4096   launchd ??/log/opendir...
633389165215   2018        0    1 W 749429656   8192   launchd ??/T/etilqs_sa...
[...]

Were they microseconds or milliseconds? I forget. While the units are documented in the script, the USAGE message, and the man page, it's really handy to have it on the output as well. The next version does this:

# iosnoop -Dt
TIME(us)       DELTA(us) UID  PID D    BLOCK   SIZE      COMM PATHNAME
[...]

All scripts should include units on the output.

Mistake 14. Capital Directories

I'm not sure starting directories with a capital letter was a great idea. It helped differentiate the scripts from the directories, and listed directories first. eg:

~/DTraceToolkit-0.99> ls
Apps            Java            Perl            User            install     
Bin             JavaScript      Php             Version         iopattern   
Code            Kernel          Proc            Zones           iosnoop     
Cpu             License         Python          dexplorer       iotop       
Disk            Locks           README          dtruss          opensnoop   
Docs            Man             Ruby            dvmstat         procsystime 
Examples        Mem             Shell           errinfo         rwsnoop     
FS              Misc            Snippits        execsnoop       rwtop       
Guide           Net             System          hotkernel       statsnoop   
Include         Notes           Tcl             hotuser                      

Well, that seems pretty useful. But then, so is aliasing ls to "ls -F", or the colorized version of ls. Those techniques won't list the directories first, but they will differentiate them. This may be a choice between having directories listed first and together, or, not needing to hit the shift key so much. Or, I could rearrange things to not mix scripts with directories.

Mistake 15. Not Organizing For Other OSes or Kernel Versions

I didn't think much about organizing the toolkit to support multiple OSes, as those ports hadn't happened yet. I should have thought harder, since around the same time DTrace was being released as open source, and I honestly expected it to be on Linux by the end of 2005. All I did do was put the OS type and version at the top of the scripts (better than nothing). Organizing the hierarchy is a problem I need to wrestle with now.

Mistake 16. Not Testing New Kernels

I didn't have a test plan for handling numerous new kernel versions. In part because I didn't yet know how much of a problem it would be, but also because I didn't expect the DTraceToolkit to be around for that long (see next mistake). When I began writing the DTraceToolkit, there was only one kernel version to test. Soon there were two, three, and then more than I had servers at home to test on. Pretty quickly people were running the DTraceToolkit on Solaris versions that I hadn't tested at all, and I didn't have the spare equipment to test.

I did get a lot of help from Stefan Parvu, who automated testing and was able to quickly do a smoke test on several kernel versions and platforms. But as the years went by, Sun kept releasing new Solaris versions, and staying on top was hard. There have been 12 versions of Solaris 10 so far (last was Jan this year), on two different platforms (x86, SPARC), meaning there are 24 OS versions to test just for Solaris 10 coverage.

One problem was that I didn't have ready access to the different versions of Solaris to test: ideally, I'd want 24 online servers spanning every version of Solaris 10, and root logins. At one point someone at Sun was going to contribute a pool of test servers to the OpenSolaris project. Unfortunately, I had to sign the OpenSolaris contributor agreement before I could use them. I didn't, and that turned out later to be a good decision.

The other problem was that testing was very time consuming, and was most of the script development time. The smaller scripts would take about 2 hours to develop: 20 minutes to write the script, 20 minutes to write the example file and man page, and then 80 minutes to test it and fix bugs found during testing. Larger scripts took longer - some took weeks. It's easy to write a DTrace script that produces numbers, but it's harder to produce accurate numbers. Testing often involved running a variety of "known workloads" and then seeing if the DTrace tool measured the workloads exactly, and investigating when it didn't.

Testing is even harder now than before, as there are multiple OSes to test.

Mistake 17. Not Planning For Success

I didn't anticipate the DTraceToolkit would be this successful, and that it would be used years later on multiple OSes. I half expected Sun to release some new DTrace-related thing, like a DTrace GUI, that would make the toolkit redundant. If I had expected it to last this long, I could have planned testing kernel versions and multiple OSes better. I guess this is a lesson for any project: what if it is really successful in the long term? What architectural choices should be made now, that won't be regretted later?

Mistake 18. Private Development and Testing

I should have made the development version of the DTraceToolkit public, even though it contained partially tested scripts. This would have allowed others to easily participate in testing these scripts, accelerating their development. Some people did help test, but this required me to email around tarballs, when I could have just had a public URL. It didn't seem like a good idea in 2005 when I created the toolkit, but nowadays it is commonplace to find both stable and unstable versions of a project online.

Mistake 19. Soliciting Contributions From Beginners

I encouraged everyone to send me scripts, which was a mistake. Most of the submissions were from DTrace beginners, some noting "this is my first ever DTrace script". Many of these produced invalid or misleading metrics. Most of the beginners were also inexperienced programmers, who didn't test or know how to test. And many didn't follow the toolkit coding style, or any programming style. I was sympathetic as they did have good intentions, and they were trying to help my project. So I would try to explain how to improve or test their script. At one point I also wrote websites on style, dos and don'ts, and hints & tips.

Problem was, most beginners didn't really understand what I was talking about, unless I spent serious time explaining. Since they had already contributed their time, some felt that I was obligated to return the favor, and explain, and keep explaining, until they understood everything. This took hours, many more than it would take to write the scripts myself. And that time didn't pay off: once the beginner realized I was serious about accuracy and testing, or the real complexity of what they were doing, they usually lost interest.

A common misconception with beginners was that the hard work was in writing the script, and that any extra work, like testing and documentation, was minor and could just be done by me. Testing is the hard work, and is where most of the development time is spent.

I should have only encouraged experienced software engineers, some of whom did send me scripts, which I included. These were often accurate, tested, and styled to begin with. They were usually created when the engineer was debugging a particular software issue, and had created a DTrace script to solve it after reading the source code and learning all the nuances.

Mistake 20. Joining Sun

I joined Sun in late 2006, and in terms of the DTraceToolkit, this was a mistake. My work on the DTraceToolkit mostly stopped, and not just because I was busier. It's not easy or short to explain why. There are a number of pros and cons when working on an open source project like this, and before Sun it seemed that the pros outweighed the cons. Pros included things like helping fellow sysadmins – which I really enjoyed – and creating tools that I'd use in my own career. But there were various cons too, and Sun added more, to the point where it was hard to justify volunteering my spare time for this project.

Just as an example – and only because it's the easiest to summarize – at one point Sun legal became involved, and investigated whether Sun could claim ownership of the DTraceToolkit. As an employee, I was now in the crosshairs of my own company's lawyers. It wasn't fun, and they tried, very hard, for months. Thanks again to those that helped eventually put a stop to that!

I don't have hard feelings towards the lawyers. In fact, I admire how hard they were working for Sun – just like I was, so on some level it was understandable. But other events were much less understandable, and much worse. Again, there's no easy or short way to explain it all, and the problems weren't limited to the DTraceToolkit - DTrace itself came under friendly fire at Sun as well.

Instead of working on the DTraceToolkit I've been writing books, which have similar incentives: I get to help other people in the industry, especially sysadmins, as well as create references for myself, which also include tools. I never stopped contributing - I changed the medium to do so.

Learning From Mistakes

I need to:

  • Get the DTraceToolkit on github.
  • Organize it to support different OSes.
  • Update existing scripts and add missing scripts.
  • Split the DTraceToolkit into two: a small number of well-tested scripts for casual users, and a large number of less-tested scripts for performance engineers.

And by less-tested I do mean probably broken. This is embracing the reality of dynamic tracing, which exposes interfaces that are not stable. That doesn't make the scripts useless; rather, these scripts need to be treated differently: they are examples of solving problems (even if they don't work), not ready-to-go tools.

The split can be either in the DTraceToolkit as subdirectories, or as separate toolkits. This addresses several of the mistakes listed above. I hope to follow up this post with another explaining how the split is done, which can refer to this post as background.

And the final lesson I should learn: I should stop writing books, to get my spare time back! (My colleagues wouldn't believe that for a second.) Ok, well, maybe I'll take a break from writing very long books, how about that? :-)


17 July, 2013

Systems Performance: Enterprise and the Cloud

I wrote another book, Systems Performance: Enterprise and the Cloud. It will be out this year, and I'm really looking forward to it helping people improve performance of their systems. While the book has been drafted, I'm not sure how many final pages it will be, as it still needs to go through the composition and layout stages of publication.

All my spare time has been consumed with it (and before it, the previous DTrace book), which is why this blog has been quiet for many years.

31 December, 2011

Still Writing

I haven't posted here in a few years, but I have been busy writing material, particularly for:
  • The DTrace book: which has over 1,000 pages, much of it new content. This ate over a year of my spare time (and Jim's).

  • dtrace.org/blogs/brendan: my professional blog, for posts related to my work (although it's still mostly a spare time project). This was formerly on blogs.oracle.com/brendan, and before that on blogs.sun.com/brendan.
For more of my recent writing, I've updated a summary under the Documentation section on my homepage, which includes posts from my dtrace.org blog and other places.

I'll post more here in the coming year: this blog is for purely personal posts and projects (like the DTraceToolkit).

24 June, 2008

DTrace in New York

Back in February I gave several DTrace talks in New York, including one at the New York OpenSolaris User Group meeting (NYCOSUG). I used an updated slide deck and was asked to put the PDF on my blog; I think Isaac must have beaten me to it and put it here (thanks!). I did intend to blog about this in case anyone was looking - sorry for the delay.

The NYCOSUG had a good turnout and asked some great questions, allowing me to deviate from the prepared slides and cover other things of interest (which is the value of an in-person presentation). After the presentation I realised there was one point I could have explained better, which would make an interesting blog post.

I started with the following simple demos - the point of these is to build on something commonly understood (such as the behaviour of fork() and exec()), to introduce something new - DTrace.

Tracing exec():
# dtrace -n 'syscall::exec*: { trace(execname); }'
dtrace: description 'syscall::exec*: ' matched 4 probes
CPU ID FUNCTION:NAME
0 98087 exece:entry bash
0 98088 exece:return ls
^C

In the above output, we traced an exece() system call - printing the current process name when we entered and returned from that system call. That process name changed from "bash" to "ls" (I executed "ls -l" in another window), which is what exec() does - replaces the current process image with another.

While unsurprising, the significance is that we are able to dynamically trace this kernel activity whenever we would like, along with thousands of other kernel events. I could, for example, trace the time taken for exec() to execute; or the exit status when exec() returned, along with the error code; I could also trace the internal operation of exec() with enough detail to fill hundreds of pages (I just counted 47396 lines of output when tracing every kernel function entry and return during exec()).
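
For example, a rough sketch of timing exec() from entry to return:

# print the time taken by each exec variant
dtrace -n 'syscall::exec*:entry { self->ts = timestamp; }
    syscall::exec*:return /self->ts/ {
        printf("%s: %d us", execname, (timestamp - self->ts) / 1000);
        self->ts = 0;
    }'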

Now to trace fork():
# dtrace -n 'syscall::fork*: { trace(pid); }'
dtrace: description 'syscall::fork*: ' matched 6 probes
CPU ID FUNCTION:NAME
0 98227 forksys:entry 87417
0 98228 forksys:return 90769
0 98228 forksys:return 87417
^C

The above system call has one entry and two returns - which is what we expect from the fork() family.

Simple as this is, some interesting behaviour is already visible. Note that the child returned before the parent? On the fork() entry, the parent's PID is traced (87417), however the child PID (90769) appears first on return.

It's possible that the output could be shuffled due to how DTrace uses per-CPU buffers to minimise performance impact; to double check, add a "timestamp" column and post sort:

# dtrace -n 'syscall::fork*: { printf("%d %d", timestamp, pid); }' -o /tmp/out.dtrace
dtrace: description 'syscall::fork*: ' matched 6 probes
# sort -n +3 /tmp/out.dtrace
CPU ID FUNCTION:NAME
1 98227 forksys:entry 268361844462135 87417
1 98228 forksys:return 268361844960455 90968
0 98228 forksys:return 268361844965924 87417

I asked the audience - why does the child return from fork() before the parent? I added that this was a very difficult question!

Someone responded to say that this was how all operating systems worked - the parent process waits for the child to complete. I said I was just tracing fork() and the parent could be scheduled first - but deliberately isn't, and explained why. My answer left them confused - and it struck me afterwards that I should have explained this better.

Consider this:
# dtrace -n 'syscall::fork*:,syscall::wait*: { trace(pid); }'
dtrace: description 'syscall::fork*:,syscall::wait*: ' matched 10 probes
CPU ID FUNCTION:NAME
0 98227 forksys:entry 87417
0 98228 forksys:return 91088
0 98228 forksys:return 87417
0 98163 waitsys:entry 87417
0 98164 waitsys:return 87417
^C

In the above output we can see both fork() and wait(), and we can discuss behaviour such as the parent process waiting for the child to complete (since I was in a shell running foreground commands.)

But I was actually asking a much deeper question, that of thread scheduling immediately after the fork() system call, and before the parent has called wait(). Immediately after fork() you have two threads - which should go on-CPU first? The parent, so that it can get to wait() sooner and before the child may have exited? Or is there a reason to schedule the child first?

As DTrace shows, the child is getting scheduled first, and the reason is one of performance. The source code explains in uts/common/disp/ts.c :

/*
 * Child is placed at back of dispatcher queue and parent gives
 * up processor so that the child runs first after the fork.
 * This allows the child immediately execing to break the multiple
 * use of copy on write pages with no disk home. The parent will
 * get to steal them back rather than uselessly copying them.
 */
static void
ts_forkret(kthread_t *t, kthread_t *ct)

The fork() system call creates a clone of the parent, but rather than copy all memory pages to a new address space (which would add significant latency during process creation), Solaris bumps a reference counter on those memory pages to remember that two processes refer to the same data. If one writes to a memory page later, this triggers a "copy on write" to create a private writable copy for that process. This means that expensive memory copies are only performed when needed - or if needed. Since a child process is likely to call exec(), it is likely to simply drop many existing memory references for the new process image, so copying those bytes would have been wasted cycles anyway.

However if the parent is scheduled first - before the child has had a chance to exec() - then the parent may continue writing to its address space, triggering copy on writes. Then the child executes, calls exec(), and drops those newly copied pages anyway - which were copied in vain. To avoid this, the child is scheduled first - to call exec() as soon as possible, as described in the comment above.
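
You can also watch the copy-on-write activity itself; a sketch using the vminfo provider:

# count copy-on-write faults by process name
dtrace -n 'vminfo:::cow_fault { @[execname] = count(); }'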

I learned about this behaviour when reading Solaris Internals 1st edition; but that was a time before DTrace and OpenSolaris. It's great that we all can now both read the code, and use DTrace to see it in operation.

18 February, 2008

DTraceToolkit in MacOS X

Apple included DTrace in MacOS X 10.5 (Leopard), released in October 2007. It's great to have DTrace available in MacOS X for its powerful application and kernel performance analysis. To think that there is now another kernel we can examine using DTrace is exciting - it's like discovering a new planet in the solar system.

Apart from kernel analysis, DTrace also improves general usability by answering every day questions like: why are my disks rattling? or why does my browser keep hanging? Your average user may not write DTrace scripts to answer these questions themselves (though it's better if they do), but instead use prewritten scripts.

MacOS X includes a collection of DTrace scripts in /usr/bin, mostly from the DTraceToolkit:
leopard# grep -l DTrace /usr/bin/*
/usr/bin/bitesize.d
/usr/bin/cpuwalk.d
/usr/bin/creatbyproc.d
/usr/bin/dappprof
/usr/bin/dapptrace
/usr/bin/diskhits
/usr/bin/dispqlen.d
/usr/bin/dtruss
/usr/bin/errinfo
/usr/bin/execsnoop
/usr/bin/fddist
/usr/bin/filebyproc.d
/usr/bin/hotspot.d
/usr/bin/httpdstat.d
/usr/bin/iofile.d
/usr/bin/iofileb.d
/usr/bin/iopattern
/usr/bin/iopending
/usr/bin/iosnoop
/usr/bin/iotop
/usr/bin/kill.d
/usr/bin/lastwords
/usr/bin/loads.d
/usr/bin/newproc.d
/usr/bin/opensnoop
/usr/bin/pathopens.d
/usr/bin/pidpersec.d
/usr/bin/plockstat
/usr/bin/priclass.d
/usr/bin/pridist.d
/usr/bin/procsystime
/usr/bin/runocc.d
/usr/bin/rwbypid.d
/usr/bin/rwbytype.d
/usr/bin/rwsnoop
/usr/bin/sampleproc
/usr/bin/seeksize.d
/usr/bin/setuids.d
/usr/bin/sigdist.d
/usr/bin/syscallbypid.d
/usr/bin/syscallbyproc.d
/usr/bin/syscallbysysc.d
/usr/bin/topsyscall
/usr/bin/topsysproc
/usr/bin/weblatency.d

That's 44 DTraceToolkit scripts, plus plockstat from Solaris 10. While the DTraceToolkit now has over 200 scripts, it makes sense to pick out the most useful scripts for inclusion in /usr/bin.

Popular scripts such as iosnoop can now be run by MacOS X users:

leopard# iosnoop
UID PID D BLOCK SIZE COMM PATHNAME
501 130 R 31987472 40960 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 7879952 8192 Terminal ??/SearchManager.nib/keyedobjects.nib
501 130 R 32132304 12288 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32132528 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32047696 12288 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32132592 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32131512 12288 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32033296 12288 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32044488 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32045064 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32131344 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32048680 16384 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32132544 8192 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32049296 12288 Terminal ??/dyld/dyld_shared_cache_i386
-1 0 W 32482848 86016 kernel_task ??/vm/swapfile2
-1 0 W 32483040 135168 kernel_task ??/vm/swapfile2
501 130 R 32044672 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32132656 12288 Terminal ??/dyld/dyld_shared_cache_i386
[...]

The man pages are conveniently included in /usr/share/man.

I had been making preparations in the latest DTraceToolkit (0.99) for MacOS X DTrace, such as putting an "OS" field into the man pages and figuring out how to support different versions of the same script (tcpsnoop_snv, etc). Hopefully many scripts will run on both Solaris and MacOS X (especially if they use stable providers), however I expect there will be some that are specific to each. Now that QNX DTrace also exists, there is additional need for identifying OS specifics in the DTraceToolkit.

It's been great news for DTrace, Sun and Apple - who have not only gained the best performance and debugging tool available, but also the existing DTrace community.

17 February, 2008

Browsable DTraceToolkit

Stefan Parvu has created browsable HTML versions of the DTraceToolkit on the DTT test page. See DTraceToolkit ver 0.99 to browse that version.

A goal of the DTraceToolkit is to provide documented examples of DTrace scripting, in addition to what is available in the DTrace Guide. However these examples have been reaching a limited audience of those who download, unzip, and browse through the text files.

Now that the DTraceToolkit is browsable online, its contents can be found by internet search engines. This should help people not only find examples of DTrace usage, but also solutions to some common observability problems.

There have been a few other items of DTraceToolkit news which I'll blog about soon. Please excuse my infrequent blog postings - I've been busy since joining Sun on a particular project which consumes most of my spare time. It will be worth it, which should be clear once I can start posting about it on my Sun blog.

05 October, 2007

DTraceToolkit ver 0.99

I've released DTraceToolkit ver 0.99 - a major release. If you haven't encountered it before, the DTraceToolkit is a collection of opensource scripts for:
  • solving various troubleshooting and performance issues
  • demonstrating what is possible with DTrace (seeing is believing)
  • learning DTrace by example

The DTraceToolkit isn't Sun's standard set of Solaris performance tools, or, everything that DTrace can do. It is a handy collection of documented and tested DTrace tools that are useful in most situations. It's certainly not a substitute for learning DTrace properly and writing your own custom scripts - although it should help you do that by providing working examples of code. It is certainly better than not using DTrace at all, if you don't have the time to learn it.

DTraceToolkit 0.96

18 months ago I released version 0.96 of the DTraceToolkit. The Bin directory, which contains symlinks to all the scripts, looked like this,


[Screenshot: DTraceToolkit version 0.96]

That collection of scripts provides various views of system behaviour and of the interaction of applications with the system. A general theme, if there was one, was that of easy system observability - illuminating system internals that were either difficult or impossible to see before. There were 104 scripts in that release - although that is just the tip of an iceberg.

DTraceToolkit 0.99

In the last 18 months, DTrace has been expanding its horizons through the creation of numerous new DTrace providers. There are now providers for Java, JavaScript, Perl, Python, Php, Ruby, Shell and Tcl, and more on the way. The iceberg of system observability is now just one in a field of icebergs, with more rising above the surface as time goes by. It's about time the DTraceToolkit was updated to reflect the trajectory of DTrace itself - which isn't just about system observability - it is the observability of life, the universe, and everything.

This new version of the DTraceToolkit does have a theme - that of programming language visibility, which covers several of the new DTrace providers that now exist. The Bin directory of the toolkit now contains 230 scripts, and looks like,


Below I've highlighted most of the new contents, grouped by language that they trace,


[Screenshot: DTraceToolkit version 0.99]

I've placed language scripts in their own subdirectories, each of which includes a "Readme" file to document suggestions for where to find the provider and how to get it installed and working. Some of these providers do require downloading of source, patching, and compiling, which may take you some time. Both C and C++ are supported by numerous scripts, but don't have a prefix to group them like the other languages do; I hope to fix this in the next release, and actually have C and C++ subdirectories.

DTraceToolkit 0.99 change summary:
  • New script categories: Java, JavaScript, Perl, Php, Python, Ruby, Shell and Tcl
  • Many new scripts (script count has doubled)
  • Many new example files (since there is one per script)
  • A new Code directory for sample target code (used by the example files)
  • Several bug fixes, numerous updates
  • Updated versions of tcpsnoop/tcptop for OpenSolaris/Solaris Nevada circa late 2007

For each language, there are about a dozen scripts to provide:
  • general observability (such as function and object counts, function flows)
  • performance observability (on-cpu/elapsed times, function inclusive/exclusive times, memory allocation)
  • some paths for deeper analysis (syscall and library tracing, stacks, etc)

Screenshots

Now for some examples. I'll demonstrate PHP, but the following applies to all the new languages supported in the toolkit (look in the Example directory for the language that interests you). Here is a sample PHP program that appears in the toolkit as Code/Php/func_abc.php,
/opt/DTT> cat -n Code/Php/func_abc.php
     1  <?php
     2  function func_c()
     3  {
     4          echo "Function C\n";
     5          sleep(1);
     6  }
     7
     8  function func_b()
     9  {
    10          echo "Function B\n";
    11          sleep(1);
    12          func_c();
    13  }
    14
    15  function func_a()
    16  {
    17          echo "Function A\n";
    18          sleep(1);
    19          func_b();
    20  }
    21
    22  func_a();
    23  ?>

For general observability there are scripts such as php_funccalls.d, for counting function calls,
# php_funccalls.d 
Tracing... Hit Ctrl-C to end.
^C
FILE FUNC CALLS
func_abc.php func_a 1
func_abc.php func_b 1
func_abc.php func_c 1
func_abc.php sleep 3

The above output showed that sleep() was called three times, and the user defined functions were each called once - as we would have assumed from the source. With DTrace you can check your assumptions through tracing, which is often a useful exercise. I've solved many performance issues by examining areas that seem almost too obvious to bother checking.

The following script is both for general and performance observability. It prints function flow along with delta times,
# php_flowtime.d
C TIME(us) FILE DELTA(us) -- FUNC
0 3646108339057 func_abc.php 9 -> func_a
0 3646108339090 func_abc.php 32 -> sleep
0 3646109341043 func_abc.php 1001953 <- sleep
0 3646109341074 func_abc.php 31 -> func_b
0 3646109341098 func_abc.php 23 -> sleep
0 3646110350712 func_abc.php 1009614 <- sleep
0 3646110350745 func_abc.php 32 -> func_c
0 3646110350768 func_abc.php 23 -> sleep
0 3646111362323 func_abc.php 1011554 <- sleep
0 3646111362351 func_abc.php 27 <- func_c
0 3646111362361 func_abc.php 10 <- func_b
0 3646111362370 func_abc.php 9 <- func_a
^C

Delta times, in this case, show the time from the line on which they appear to the previous line. The large delta times visible show that from when the sleep() function began to when it completed took around 1.0 seconds of elapsed time, as we'd expect from the source. Both CPU (C) and time since boot (TIME(us)) are printed in case the output is shuffled on multi-CPU systems, and requires post sorting.

A more concise way to examine elapsed times may be php_calltime.d, a script that can be useful for performance analysis,
# php_calltime.d
Tracing... Hit Ctrl-C to end.
^C

Count,
FILE TYPE NAME COUNT
func_abc.php func func_a 1
func_abc.php func func_b 1
func_abc.php func func_c 1
func_abc.php func sleep 3
- total - 6

Exclusive function elapsed times (us),
FILE TYPE NAME TOTAL
func_abc.php func func_c 330
func_abc.php func func_b 367
func_abc.php func func_a 418
func_abc.php func sleep 3025644
- total - 3026761

Inclusive function elapsed times (us),
FILE TYPE NAME TOTAL
func_abc.php func func_c 1010119
func_abc.php func func_b 2020118
func_abc.php func sleep 3025644
func_abc.php func func_a 3026761

The exclusive function elapsed times show which functions caused the latency - which identifies sleep() clocking a total of 3.0 seconds (as it was called three times). The inclusive times show which higher level functions the latency occurred in, the largest being func_a() since every other function was called within it. There isn't a total line printed for inclusive times - since it wouldn't make much sense to do so (that column sums to 9 seconds, which is more confusing than useful).
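
For the curious, a minimal sketch of how inclusive time can be measured with the PHP provider, using a thread-local stack of entry timestamps (php_calltime.d does this and more, including the exclusive-time bookkeeping):

#!/usr/sbin/dtrace -Zs
php*:::function-entry
/arg0/
{
        self->depth++;
        self->start[self->depth] = timestamp;
}

php*:::function-return
/arg0 && self->start[self->depth]/
{
        /* inclusive time: entry to return, including time in called functions */
        @incl[copyinstr(arg0)] = sum(timestamp - self->start[self->depth]);
        self->start[self->depth] = 0;
        self->depth--;
}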

As a starting point for deeper analysis, the following script adds syscall tracing to the PHP function flows, and adds terminal escape colours (see Notes/ALLcolors_notes.txt for notes on using colors with DTrace),
# php_syscolors.d
C PID/TID DELTA(us) FILE:LINE TYPE -- NAME
0 18426/1 8 func_abc.php:22 func -> func_a
0 18426/1 41 func_abc.php:18 func -> sleep
0 18426/1 15 ":- syscall -> nanosleep
0 18426/1 1008700 ":- syscall <- nanosleep
0 18426/1 30 func_abc.php:18 func <- sleep
0 18426/1 42 func_abc.php:19 func -> func_b
0 18426/1 28 func_abc.php:11 func -> sleep
0 18426/1 14 ":- syscall -> nanosleep
0 18426/1 1010083 ":- syscall <- nanosleep
0 18426/1 29 func_abc.php:11 func <- sleep
0 18426/1 43 func_abc.php:12 func -> func_c
0 18426/1 28 func_abc.php:5 func -> sleep
0 18426/1 14 ":- syscall -> nanosleep
0 18426/1 1009794 ":- syscall <- nanosleep
0 18426/1 28 func_abc.php:5 func <- sleep
0 18426/1 34 func_abc.php:6 func <- func_c
0 18426/1 18 func_abc.php:13 func <- func_b
0 18426/1 17 func_abc.php:20 func <- func_a
0 18426/1 21 ":- syscall -> fchdir
0 18426/1 19 ":- syscall <- fchdir
0 18426/1 9 ":- syscall -> close
0 18426/1 13 ":- syscall <- close
0 18426/1 35 ":- syscall -> semsys
0 18426/1 12 ":- syscall <- semsys
0 18426/1 7 ":- syscall -> semsys
0 18426/1 7 ":- syscall <- semsys
0 18426/1 66 ":- syscall -> setitimer
0 18426/1 8 ":- syscall <- setitimer
0 18426/1 39 ":- syscall -> read
0 18426/1 14 ":- syscall <- read
0 18426/1 11 ":- syscall -> writev
0 18426/1 22 ":- syscall <- writev
0 18426/1 23 ":- syscall -> write
0 18426/1 110 ":- syscall <- write
0 18426/1 61 ":- syscall -> pollsys

From that output we can see that PHP's sleep() is implemented by the syscall nanosleep(), and that it appears that writes are buffered and written later.

Understanding the language scripts

The scripts are potentially confusing for a couple of reasons; understanding these will help explain how best to use these scripts.

Firstly, some of the scripts print out an event "TYPE" field (eg, "syscall" or "func"), and yet the script only traces one type of event (eg, "func"). Why print a field that will always contain the same data? Sounds like it might be a copy-n-paste error. This is in fact deliberate so that the scripts form better templates for expansion, and was something I learnt while using the scripts to analyse a variety of problems. I would frequently have a script that measured functions and wanted to add syscalls, library calls or kernel events, to customise that script for a particular problem. When adding these other event types, it was much easier (often trivial) if the script already had a "TYPE" field. So now most of them do.
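
As a sketch of what that looks like in practice (run against a PHP process with -p or -c; the format is illustrative):

#!/usr/sbin/dtrace -Zs
#pragma D option quiet

php$target:::function-entry
/arg0/
{
        printf("%-8s -> %s\n", "func", copyinstr(arg0));
}

syscall:::entry
/pid == $target/
{
        printf("%-8s -> %s\n", "syscall", probefunc);
}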

Also, there are variants of scripts that provide very similar tracing at different depths of detail. Why not one script that does everything, with the most detail? Here I'll use the Tcl provider to explain why,

The tcl_proccalls.d script counts procedure calls; here it is tracing the Code/Tcl/func_slow.tcl program in the toolkit, which runs loops within procedures:
# tcl_proccalls.d
Tracing... Hit Ctrl-C to end.
^C
PID COUNT PROCEDURE
183083 1 func_a
183083 1 func_b
183083 1 func_c
183083 1 tclInit

Ok, that's a very high level view of Tcl activity. There is also tcl_calls.d to count Tcl commands as well as procedures:
# tcl_calls.d
Tracing... Hit Ctrl-C to end.
^C
PID TYPE NAME COUNT
[...]
183086 proc func_a 1
183086 proc func_b 1
183086 proc func_c 1
183086 proc tclInit 1
[...]
183086 cmd if 8
183086 cmd info 11
183086 cmd file 12
183086 cmd proc 12

(Output truncated). Now we have a deeper view of Tcl operation. Finally there is tcl_ins.d to count Tcl instructions:
# tcl_ins.d
Tracing... Hit Ctrl-C to end.
^C
PID TYPE NAME COUNT
[...]
16005 inst jump1 14
16005 inst pop 18
16005 inst invokeStk1 53
16005 inst add 600000
16005 inst concat1 600000
16005 inst exprStk 600000
16005 inst lt 600007
16005 inst storeScalar1 600016
16005 inst done 600021
16005 inst loadScalar1 1200020
16005 inst push1 4200193

There are such large counts since this script is tracing the low level instructions used to execute the loops in this code.

So the question is: why three scripts and not one that counts everything? Why not tcl_megacount.d? There are at least three reasons behind this.

1. You don't pull out a microscope if you are looking for your umbrella. Tracing everything can really bog you down in detail, when higher level views are frequently sufficient for solving issues. It's a matter of the right tool for the job. Try the higher level scripts first, then dig deeper as needed.

2. Measuring at such a low level can have a significant performance impact. I'll demonstrate this by using ptime as an easy (but rough) measurement of run time for the Code/Tcl/func_slow.tcl sample program.

Here is func_slow.tcl running with no DTrace enabled as a baseline:
# ptime ./func_slow.tcl > /dev/null

real 3.306
user 3.276
sys 0.005

Now while tcl_proccalls.d is tracing procedure calls:
# ptime ./func_slow.tcl > /dev/null

real 3.311
user 3.270
sys 0.006

Now while tcl_calls.d is tracing both Tcl procedures and commands:
#  ptime ./func_slow.tcl > /dev/null

real 3.313
user 3.283
sys 0.006

Finally with tcl_ins.d tracing Tcl instructions:
# ptime ./func_slow.tcl > /dev/null

real 20.438
user 20.278
sys 0.010

Ow! Our target application is running over 6 times slower? This isn't supposed to happen - is there something wrong with DTrace or the Tcl provider? Well, there isn't: the problem is that of tracing 9 million events during a 3 second program. The overheads of DTrace are minuscule, however if you multiply them (or anything) by ridiculously large numbers they'll eventually become measurable. 9 million events is a lot of events - and you wouldn't normally care about tracing Tcl instructions anyway. It's great to have this capability, if it is ever needed.

3. Example programs. Writing mega powerful DTrace scripts often means mega confusing scripts to read. People use the DTraceToolkit as a collection of example code to learn from, so keeping things simple, where possible, should help.

Documentation

It usually takes more time to document a script than it takes to write it. I feel strongly about creating quality IT documentation, so I'm reluctant to skip this step. The strategy used in the toolkit is, by subdirectory name,
  • Scripts - these have a standard header plus inline comments.
  • Examples - this is a directory of example files that demonstrate each script, providing screenshots and interpretations of the output. Particular attention has been paid to explaining relevant details, especially caveats, no matter how obvious they may seem.
  • Notes - this is a directory of files that comment on a variety of topics that are relevant to multiple scripts. There are many more notes files in this toolkit release.
  • Man pages - there is a directory for these, which are useful for looking up field definitions and units. To be honest, I frequently use the Examples directory for reminders on what each tool does rather than the Man pages.

Since there were over 100 new scripts in this release, over 100 new example files needed to be written. Fortunately I had help from Claire Black (my wife), who as an ex-SysAdmin and Solaris Instructor had an excellent background for what people needed to know and how to express it. Unfortunately for her, she only had about 4 days from when I completed these scripts to when the documentation was due for inclusion on the next OpenSolaris Starter Kit disc. Despite the time constraint she has done a great job, and there is always the next release for her to spend more time fleshing out the examples further. I'm also aiming to collect more real-world example outputs for the next release (eg, Mediawiki analysis for the PHP scripts).

Testing

Stefan Parvu has helped out again by running every script on a variety of platforms, which is especially important now that I don't have a SPARC server at the moment. He has posted the results on the DTraceToolkit Testing Room, and did find a few bugs which I fixed before release.

Personal

It has been about 18 months since the last release. In case anyone was wondering, there are several reasons why, briefly: I moved to the US, began a new job in Sun engineering, left most of my development servers behind in Australia, brought the DTraceToolkit dev environment over on a SPARC UFS harddisk (which I still can't read), and got hung up on what to do with the fragile tcp* scripts. The main reason would be having less spare time as I learnt the ways of Sun engineering.

I also have a Sun blog now, here. I'm not announcing the DTraceToolkit there since it isn't a Sun project; it's a spare time project by myself and others of the OpenSolaris community.

Future

The theme for the next release may be storage: I'd like to spend some time seeing what can be done for ZFS, for iSCSI (using the new provider), and NFS.