15 May, 2015

The DTraceToolkit Project Has Ended

Ten years ago on this day I created the DTraceToolkit, and it's time to call the project ended. Its scripts live on in different operating systems: OS X, FreeBSD, Oracle Solaris 11, and other Solaris derivatives; as packages for OmniOS and SmartOS; and integrated into other tools. Thanks to everyone who helped make it a success.

I documented its origin in the History file:

$ more DTraceToolkit-0.99/Docs/History
------------------------------------------------------------------------------
20-Apr-2005     Brendan Gregg   Idea
        For a while I had thought that a DTrace toolkit would be a nice 
        idea, but on this day it became clear. I was explaining DTrace to 
        an SSE from Sun (Canberra, Australia), who had a need for using 
        DTrace but didn't have the time to sit down and write all the
        tools he was after. It simply made sense to have a DTrace toolkit
        that people could download or carry around a copy to use. Some
        people would write DTrace tools, others would use the toolkit.
------------------------------------------------------------------------------
15-May-2005     Brendan Gregg   Version 0.30
        I had discussed the idea of a DTrace toolkit with the Sun PAE guys in 
        Adelaide, Australia. It was making more sense now. It would be much
        like the SE Toolkit, not just due to the large number of sample 
        scripts provided, but also due to the role it would play: few people
        wrote SE Toolkit programs, more people used it as a toolkit. While
        we would like a majority of Solaris users to write DTrace scripts, 
        the reality is that many would want to use a prewritten toolkit.
        Today I created the toolkit as version 0.30, with 11 main directories,
        a dozen scripts, man pages and a structure for documentation.
...

Back in 2005, the DTraceToolkit was a collection of robust performance tools for a single OS and kernel, providing advanced performance insight beyond the typical Unix toolset. I've had countless emails from sysadmins and developers who have used it to solve performance issues in production, and wanted to say thanks. I've appreciated the kind words!

Today, in 2015, the 230-script DTraceToolkit is more like a large collection of ancient kernel patches that, when they do work, often do so by sheer luck. I didn't know in 2005 that DTrace would appear on other operating systems, and that each kernel would change as much as it did. In particular there were numerous changes to the DTrace syscall provider, which caused the scripts to be more tied to kernel versions than expected.

To put this all in today's terms: the DTraceToolkit became like a set of 230 amazing Linux 2.6.11 patches that people want working on Linux 3.2, 3.13, and 4.0, and FreeBSD 10.0, 11.0, and Oracle Solaris 11! Such a feat isn't impossible, in the strictest sense of the word, but it is impractical.

In 2013 I saw how to fix this, while keeping the DTraceToolkit as a central collection of scripts. The trick was realizing that there were two audiences with different requirements, who could be served by having two different collections of tools:

  • A toolkit of working and maintained tools for everyone to use. It should be easy to learn, providing simple Unix-like tools with man pages and example files. It should also provide the fewest tools possible (fewer than twenty), to make it easier to learn, browse, and maintain.
  • A toolshed of in-development or unmaintained tools for performance engineers to browse. This would be a large library of hundreds of scripts, most of which won't work on any given kernel. These serve as ideas, suggestions, and starting points for performance analysis, and can be fixed when needed. (In a way, the DTrace book serves as this; its scripts are on dtracebook.com and github.)

I planned to do this split, and started explaining it in my 2013 post on DTraceToolkit 0.xx Mistakes. It would be a lot of work. I was primarily interested in helping the toolkit users, but I had another minor motivation: to give a DTraceToolkit talk at the USENIX/LISA conference – a dream I'd had for years. Despite giving many talks, and including the DTraceToolkit in some of them, I'd never given a canonical DTraceToolkit talk.

However, it was not to be. In March 2014 I stopped working on Solaris or any of its derivatives, and accepted a job to work primarily on Linux performance. While the DTraceToolkit was always a spare time project, I have to admit that it's no longer a priority, and hasn't been for years. There are new and exciting things in tech to work on (including Linux eBPF), which are more related to my day job and career going forward.

I still use some DTrace ... on FreeBSD, which is also in use at my new job. And last year I released a new set of (toolshed-like) tools for FreeBSD: the DTrace-tools collection.

I expected my new job to be the most challenging of my career, and it has been. Early on I desperately missed the DTraceToolkit while on Linux, as well as my other DTrace tools. But I've been writing new ones, out of necessity, based on Linux ftrace, perf_events, SystemTap, and eBPF. I'm making progress, script by script, bringing the observability I need to Linux, and sharing these scripts online.

I gave my LISA talk after all, in 2014, but it was titled Linux Performance Analysis: New Tools and Old Secrets. This was about my Linux ftrace and perf_events-based tools: perf-tools, which are inspired by my own DTraceToolkit. It includes multi-tools like funccount and kprobe, which, for me, make a giant difference. I can rapidly explore kernel behavior again.
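For example, counting which kernel functions are firing is a one-liner with funccount (an illustrative invocation; see the perf-tools documentation for the exact options):

 # count calls to kernel block I/O functions, with a summary every second
 ./funccount -i 1 'bio_*'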

A number of the DTraceToolkit scripts live on in different OSes, and I'm glad for that to happen: they have a life of their own. But I can't recommend anyone continue the DTraceToolkit project. If you want to understand more background as to why, then my mistakes blog post should be a good start.

I'm proud of what I accomplished with the DTraceToolkit, the DTrace book, and with Sun Solaris in the field of performance. Thank you Sun – you were awesome back in the day.

I'll be moving the DTraceToolkit into my Crypt, which has my other retired Solaris software. It's not easy to do this, but I think it's better to communicate bad news than none at all.

dtrace -n 'END { printf("The %stoolkit has %sED\n", probeprov, probename); }'

Probably the best outcome of my DTraceToolkit work was my DTrace book with Jim Mauro. I really think of it as the DTraceToolkit version 2.0.

PS. This is my last post to this blog, which mostly existed for DTraceToolkit updates. My new blog is here.

05 September, 2013

DTraceToolkit 0.XX Mistakes

You learn more from failure than you do success. In this post, I'd like to list my mistakes and failures from versions 0.01 to 0.99 of the DTraceToolkit, as lessons to learn from. There is much detail here that may only interest a small number of people. And if you are a recruiter, I can assure you that this won't interest you at all, so please stop reading. In fact, I'm another Brendan Gregg – the one who makes many mistakes – since there are several of us, after all. :-)

As a summary of the lessons learned, skip to the end for the "Learning From Mistakes" section.

Background

By 2005 I was sharing a small collection of DTrace scripts – tools – on my homepage. These were popular with a wide audience, including those who didn't have time to write DTrace scripts from scratch, and those who weren't programmers anyway. For these casual DTrace users I created a toolkit, to:

  • Give people useful tools to run immediately
  • Present good examples of DTrace to learn from

I was doing performance consulting and training, and had a keen sense of what problems needed solving. So creating useful tools was easy: I already had a laundry list of needs. As for good examples: I made my own coding style and stuck to it. I also created documentation for every script: a man page, and a file showing example usage. And I tested every script carefully.

The toolkit has been successful, helping people solve issues and learn DTrace. The scripts have also been included, as tools, in multiple operating systems by default, including Mac OS X and Oracle Solaris.

Mistake 1. Missing Scripts

The observability coverage was a little uneven, as it was based on my performance experience at the time. Some areas I didn't write enough scripts for, and some areas I missed by mistake: for example, socket I/O from the syscall interface. I should have drawn a functional diagram of the system and kernel internals, to look for areas that were lacking scripts and observability.

I did cover the missing areas when I wrote numerous new scripts for the DTrace book, which are shared online as their own collection. But I haven't figured out how to include them in the DTraceToolkit, as they were created as example scripts (that likely need tweaking to run on different OSes) and not robust tools. So, now I have two collections of scripts, and over 400 scripts total.

Mistake 2. Too Many Scripts

I think I have too many scripts for the target audience: casual users. I'm reminded of the function/complexity ratio described in The Mythical Man Month, where the addition of some function came at the cost of much more complexity, making computers harder to learn and use. Casual users may only use a handful of the scripts, and having extra scripts around adds complexity to browse. This also becomes a maintenance burden, and testing can fall behind (it already has).

I did use a directory hierarchy to organize the scripts, with "top scripts" in the top level directory, which I think helps. I've also been thinking of creating a separate collection for performance engineers which contains every script (400+), and reducing the DTraceToolkit to 100 or fewer for the casual users.

Mistake 3. Inventing My Own Style

I should not have invented my own style to begin with. This included double asterisks for block comments, and no spaces after commas. From the original iosnoop:

#!/usr/sbin/dtrace -s
/*
** iosnoop.d - A program to print I/O events as they happen, with useful
** details such as UID, PID, inode, command, etc. 
** Written in DTrace (Solaris 10 build 51).
**
** 29-Mar-2004, ver 0.60.  (check for newer versions)
[...]

At the time I thought it was neat. I now think it looks bad. After about 50 scripts, someone from Sun suggested I follow "cstyle", as that was the standard for Sun's C code. This seemed to be a better idea than my own invented style, but I had already written 50 scripts! I had to rewrite them all to be cstyled - a nuisance. I should have asked others in the DTrace community before creating my own style, as many would have made the same suggestion: use cstyle.

Mistake 4. Complex scripts

Some of the scripts are too long and too complicated. These make poor examples to learn DTrace from, which was one of the goals of the toolkit. They also are a pain to maintain. In some cases it was necessary, since the point of the script was to resemble a typical Unix *stat tool, and therefore needed shell wrapping (getopts). Eg, iosnoop, iotop, and execsnoop. But in other cases it wasn't necessary, like with tcpsnoop.d.

One reason tcpsnoop.d is complex is the provider it uses (next mistake), but another reason was the aim. I wanted a tool that emitted the exact same output as snoop (Solaris tcpdump), but decorated with a PID column and other kernel context that can't be seen on the wire. This turned out to be very complex without a stable tcp provider to use, especially correctly matching the TCP handshake packets, RST packets from closed port SYNs, and other TCP behavior. I should have stopped and gone back to the problems this would solve: primarily, matching TCP sessions to their PIDs, and quantifying their workload. That could have been solved by tracing send and receive packets alone. Other objectives, like tracing closed port packets, should have been handled by separate scripts. Doing everything at once was ideal, but not practical at the time.

The DTrace book taught me discipline for creating short, simple, and useful scripts, that fit on half a textbook page. I think that's a better approach where possible, which may mean several small scripts for specific problems, instead of one long script. That will help with readability, maintenance, and testing.

Mistake 5. The fbt Provider

I suspected I was making this mistake at the time, and I was. The function boundary tracing (fbt) provider does dynamic tracing of kernel functions, arguments, return values, and data structures. While the observability is amazing, any script that is fbt-based is tied to a kernel version (the code it instruments), and may stop working after an upgrade or a patch. This happened with tcpsnoop.d and tcptop, complex scripts that instrumented many TCP internals. An excerpt:

/*
 * TCP Fetch "port closed" ports
 */
fbt:ip:tcp_xchg:entry
/self->reset/
{
#if defined(_BIG_ENDIAN)
 self->lport = (uint16_t)arg0;
 self->fport = (uint16_t)arg1;
#else
[...]

The function this traces, tcp_xchg(), was removed in a kernel update. This broke tcpsnoop.d. The sad part is that this code was only necessary for tracing closed port packets (RST), and most of the time people weren't using tcpsnoop.d for that. See the earlier mistake. Had this been separate scripts, the breakage would be isolated, and, easier to fix and test.

I stopped creating fbt-based TCP scripts after tcpsnoop and tcptop, waiting for a stable DTrace tcp provider to be developed (which I ended up developing). I think it was useful to have them there, even though they usually didn't work, because it has been important to show that DTrace can do kernel-level TCP tracing. See my other post on the value of broken tools.
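For comparison, once a stable tcp provider exists, equivalent tracing becomes short and independent of kernel internals. A minimal sketch (assuming the tcpsinfo_t members of the illumos tcp provider; details may vary by OS):

 tcp:::send, tcp:::receive
 {
         /* count packets by remote address and port, using stable arguments */
         @packets[args[3]->tcps_raddr, args[3]->tcps_rport] = count();
 }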

Mistake 6. The syscall Provider

I knew this mistake was theoretically possible, but never expected it would actually happen. The DTrace syscall provider is technically an unstable interface, due to the way it instruments the kernel trap table implementation. The DTrace manual mentioned the couple of places where this mattered, as probe names differed from syscall names. So any script that traces syscalls could break after a kernel upgrade, if the syscall trap table interface changed. Well, I'd expect syscalls to be added, but I didn't expect many changes to the existing names.
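At the least, a defensive habit is to check what the syscall provider on a given kernel actually calls its probes before relying on the names. For example, listing the fork-family probes:

 # dtrace -l -n 'syscall::*fork*:entry'

The names printed (fork1, forksys, and so on, depending on the release) are whatever the trap table uses on that kernel, which is not always the documented syscall name.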

Oracle Solaris 11 went ahead and changed the syscall trap table significantly. This was to slightly simplify the code (housekeeping), making it a little easier for the few kernel engineers who maintain it. A side effect is that it broke many DTrace one-liners and scripts that customers were using. Taking that into consideration, I think this change was a mistake: it adds more complexity for customers to endure than the function it provides (see The Mythical Man Month). In one use case, customers on Oracle Solaris 11 must include the birthday of an engineer in their scripts (a magic number) in order to have similar functionality as before. This is just madness.

Whose mistake is this, ultimately? Everyone's. The DTrace syscall provider implementation was unstable to begin with, and while that wasn't a big problem at the time, you could argue that it should have been implemented as a stable provider from the very beginning. Perhaps I should not have used it so much, although without a stable alternative, much of the value of the DTraceToolkit would be lost. Avoiding it would also have made DTrace adoption more difficult, as tutorials usually begin by tracing the well-understood system calls. But the biggest mistake may be Oracle making these changes without also fixing the syscall provider. They did fix the DTraceToolkit scripts and shipped them with Oracle Solaris 11, but much of the existing documentation for DTrace has not been fixed, nor is it easy to do so. All documentation needs to note that Oracle Solaris 11 is a special case, where the trap table differences are so large that you do need to learn them. Fail.

Mistake 7. Scripts That Build Scripts

I shouldn't have written complex scripts that generate DTrace scripts. I did this once, with errinfo, a Perl program that builds and executes DTrace as a co-process. After finishing that I realized it was a bad idea, and did no more. It makes the script a difficult example to learn from and difficult to maintain. What I did do for many scripts (iosnoop, opensnoop, etc.) was to break the program into two halves: the top half is a shell script for processing options, and the bottom half is the D script. This kept it simple, and maintenance wasn't too difficult.
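Here is a minimal sketch of that layout (a hypothetical snoop-style tool, not an actual toolkit script): the top half is plain shell parsing options, and the bottom half is the D program passed to dtrace, with the shell variables spliced into the predicate:

 #!/usr/bin/sh
 # top half: option processing in shell
 opt_pid=0; pid=0
 while getopts p: name
 do
         case $name in
         p)      opt_pid=1; pid=$OPTARG ;;
         *)      echo "USAGE: example_snoop [-p PID]" >&2; exit 1 ;;
         esac
 done

 # bottom half: the D script; $opt_pid and $pid are interpolated by the shell
 /usr/sbin/dtrace -n '
  #pragma D option quiet
  syscall::open*:entry
  /'$opt_pid' == 0 || pid == '$pid'/
  {
         printf("%6d %-16s %s\n", pid, execname, copyinstr(arg0));
  }
 '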

Mistake 8. The Names iosnoop and iotop

I should have called these disksnoop and disktop, to make it clear that they are tracing physical I/O and not logical I/O (syscalls). It's confused a lot of people, and I'm sorry. I should have picked better names. iotop has since been ported to Linux (more than once), where the name has been kept, so I've indirectly confused those users as well.

Mistake 9. Not Testing iosnoop With High Enough Load

I didn't test iosnoop with a high enough disk IOPS rate. My development servers had a limited number of disks, where it ran fine. But on large scale servers, people reported "dynamic variable drops". The fix was to increase the dynvarsize tunable, which I have done in the next version (comes after the "switchrate" line):

 /* boost the following if you get "dynamic variable drops" */
 #pragma D option dynvarsize=16m

I could also change the key of the associative arrays to reduce overhead. It is currently the device and block number (this->dev, this->blk), but using the pre-translated message pointer (arg0) also works on many if not all OSes. The downside is that arg0 is an unstable interface, which I'd rather avoid.
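As a simplified sketch of that keying (not the actual iosnoop source), the start and done probes are matched on a (device, block) key, with dynvarsize raised as above:

 /* boost the following if you get "dynamic variable drops" */
 #pragma D option dynvarsize=16m

 io:::start
 {
         start_ts[args[0]->b_edev, args[0]->b_blkno] = timestamp;
 }

 io:::done
 /start_ts[args[0]->b_edev, args[0]->b_blkno]/
 {
         this->delta = timestamp - start_ts[args[0]->b_edev, args[0]->b_blkno];
         printf("%10d us  %s\n", (int)(this->delta / 1000), args[1]->dev_statname);
         start_ts[args[0]->b_edev, args[0]->b_blkno] = 0;
 }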

Mistake 10. Using io:genunix::start

I should not have used this probe at all, or, I should have added a comment to explain why I was. io:::start traces all disk I/O from the block device interface, plus NFS client I/O. Specifying "genunix" as the module name matched the disk I/O only, which is what I wanted, as NFS I/O came from a different module. But that's an unstable way to do it, and these scripts didn't work on other OSes that didn't have "genunix" as the module name. In later versions of my scripts, I removed the genunix, which means they also match NFS I/O.

I explained this in the DTrace book, and mentioned the stable fix: using io:::start with the predicate /args[1]->dev_name != "nfs"/.
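In script form, that stable version looks like this (a sketch; dev_name and dev_statname are members of the io provider's translated devinfo_t argument):

 io:::start
 /args[1]->dev_name != "nfs"/
 {
         @bytes[args[1]->dev_statname] = sum(args[0]->b_bcount);
 }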

Mistake 11. Using Global Associative Arrays

These can become corrupted, and did for the cputimes script. This one in particular:

        /* determine the name for this thread */
        program[cpu] = pid == 0 ? idle[cpu] ? "IDLE" : "KERNEL" :
            OPT_all ? execname : "PROCESS";

That's saving a string to a global associative array, program[], which is keyed on the CPU id. Unlike aggregations or thread-local variables, these are not multi-CPU safe. cputimes would sometimes print strings that were garbled. Fortunately this wasn't subtle corruption, but was pretty obvious.
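Aggregations, by contrast, are buffered per CPU and merged safely. A minimal sketch of accounting by a computed name the multi-CPU-safe way (this is not the cputimes source, which uses the sched provider; it just shows the general pattern):

 profile-997hz
 {
         /* pid 0 lumps the kernel and idle threads together in this sketch */
         @samples[pid == 0 ? "KERNEL/IDLE" : execname] = count();
 }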

Mistake 12. dapptrace

I should have warned about the overheads of dapptrace more clearly. With the -a option, it traces every user-level function, which for some applications will significantly slow the target. Without -a, it traces just the application text segment, which, depending on the application, can also incur high overhead. I warned about it in the example file, but I should have warned about it in the script and man page as well.

Maybe I shouldn't have even written this tool. With applications, I'm usually tracing a specific function or group of functions, which minimizes the overhead.
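For example, instrumenting a single function of interest with the pid provider keeps the overhead proportional to that function's call rate (the function and PID here are purely illustrative):

 # dtrace -n 'pid$target::malloc:entry { @sizes = quantize(arg0); }' -p 1234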

Mistake 13. Missing Units On Output

I should have always put units on the output. For example, running iosnoop with completion times (-t, for the TIME column) and I/O delta times (start to done, -D, for the DELTA column):

# iosnoop -Dt
TIME           DELTA     UID  PID D    BLOCK   SIZE      COMM PATHNAME
633389165137   1903        0    1 W 749429752   4096   launchd ??/T/etilqs_sa...
633389165187   1461        0    1 W 385608448   4096   launchd ??/log/opendir...
633389165215   2018        0    1 W 749429656   8192   launchd ??/T/etilqs_sa...
[...]

Were they microseconds or milliseconds? I forget. While the units are documented in the script, the USAGE message, and the man page, it's really handy to have them on the output as well. The next version does this:

# iosnoop -Dt
TIME(us)       DELTA(us) UID  PID D    BLOCK   SIZE      COMM PATHNAME
[...]

All scripts should include units on the output.

Mistake 14. Capital Directories

I'm not sure starting directories with a capital letter was a great idea. It helped differentiate the scripts from the directories, and listed directories first, e.g.:

~/DTraceToolkit-0.99> ls
Apps            Java            Perl            User            install     
Bin             JavaScript      Php             Version         iopattern   
Code            Kernel          Proc            Zones           iosnoop     
Cpu             License         Python          dexplorer       iotop       
Disk            Locks           README          dtruss          opensnoop   
Docs            Man             Ruby            dvmstat         procsystime 
Examples        Mem             Shell           errinfo         rwsnoop     
FS              Misc            Snippits        execsnoop       rwtop       
Guide           Net             System          hotkernel       statsnoop   
Include         Notes           Tcl             hotuser                      

Well, that seems pretty useful. But then, so is aliasing ls to "ls -F", or the colorized version of ls. Those techniques won't list the directories first, but they will differentiate them. This may be a choice between having directories listed first and together, or not needing to hit the shift key so much. Or, I could rearrange things so that scripts and directories aren't mixed.

Mistake 15. Not Organizing For Other OSes or Kernel Versions

I didn't think much about organizing the toolkit to support multiple OSes, as those ports hadn't happened yet. I should have thought harder, since around the same time DTrace was being released as open source, and I honestly expected it to be on Linux by the end of 2005. All I did do was put the OS type and version at the top of the scripts (better than nothing). Organizing the hierarchy is a problem I need to wrestle with now.

Mistake 16. Not Testing New Kernels

I didn't have a test plan for handling numerous new kernel versions. In part because I didn't yet know how much of a problem it would be, but also because I didn't expect the DTraceToolkit to be around for that long (see next mistake). When I began writing the DTraceToolkit, there was only one kernel version to test. Soon there were two, three, and then more than I had servers at home to test on. Pretty quickly people were running the DTraceToolkit on Solaris versions that I hadn't tested at all, and I didn't have the spare equipment to test.

I did get a lot of help from Stefan Parvu, who automated testing and was able to quickly do a smoke test on several kernel versions and platforms. But as the years went by, Sun kept releasing new Solaris versions, and staying on top was hard. There have been 12 versions of Solaris 10 so far (the last was in January this year), on two different platforms (x86, SPARC), meaning there are 24 OS versions to test just for Solaris 10 coverage.

One problem was that I didn't have ready access to the different versions of Solaris to test: ideally, I'd want 24 online servers spanning every version of Solaris 10, and root logins. At one point someone at Sun was going to contribute a pool of test servers to the OpenSolaris project. Unfortunately, I had to sign the OpenSolaris contributor agreement before I could use them. I didn't, and that turned out later to be a good decision.

The other problem was that testing was very time consuming, and took most of the script development time. The smaller scripts would take about 2 hours to develop: 20 minutes to write the script, 20 minutes to write the example file and man page, and then 80 minutes to test it and fix bugs found during testing. Larger scripts took longer - some took weeks. It's easy to write a DTrace script that produces numbers, but it's harder to produce accurate numbers. Testing often involved running a variety of "known workloads" and then seeing if the DTrace tool measured the workloads exactly, and investigating when it didn't.

Testing is even harder now than before, as there are multiple OSes to test.

Mistake 17. Not Planning For Success

I didn't anticipate the DTraceToolkit would be this successful, and that it would be used years later on multiple OSes. I half expected Sun to release some new DTrace-related thing, like a DTrace GUI, that would make the toolkit redundant. If I had expected it to last this long, I could have planned testing kernel versions and multiple OSes better. I guess this is a lesson for any project: what if it is really successful in the long term? What architectural choices should be made now, that won't be regretted later?

Mistake 18. Private Development and Testing

I should have made the development version of the DTraceToolkit public, even though it contained partially tested scripts. This would have allowed others to easily participate in testing these scripts, accelerating their development. Some people did help test, but this required me to email around tarballs, when I could have just had a public URL. It didn't seem like a good idea in 2005 when I created the toolkit, but nowadays it is commonplace to find both stable and unstable versions of a project online.

Mistake 19. Soliciting Contributions From Beginners

I encouraged everyone to send me scripts, which was a mistake. Most of the submissions were from DTrace beginners, some noting "this is my first ever DTrace script". Many of these produced invalid or misleading metrics. Most of the beginners were also inexperienced programmers, who didn't test or know how to test. And many didn't follow the toolkit coding style, or any programming style. I was sympathetic as they did have good intentions, and they were trying to help my project. So I would try to explain how to improve or test their script. At one point I also wrote websites on style, dos and don'ts, and hints & tips.

Problem was, most beginners didn't really understand what I was talking about, unless I spent serious time explaining. Since they had already contributed their time, some felt that I was obligated to return the favor, and explain, and keep explaining, until they understood everything. This took hours, many more than it would take to write the scripts myself. And that time didn't pay off: once the beginner realized I was serious about accuracy and testing, or saw the real complexity of what they were doing, they usually lost interest.

A common misconception with beginners was that the hard work was in writing the script, and that any extra work, like testing and documentation, was minor and could just be done by me. Testing is the hard work, and is where most of the development time is spent.

I should have only encouraged experienced software engineers. Some of whom did send me scripts, which I included. These were often accurate, tested, and styled to begin with. They were usually created when the engineer was debugging a particular software issue, and had created a DTrace script to solve it after reading the source code and learning all the nuances.

Mistake 20. Joining Sun

I joined Sun in late 2006, and in terms of the DTraceToolkit, this was a mistake. My work on the DTraceToolkit mostly stopped, and not just because I was busier. It's not easy or short to explain why. There are a number of pros and cons when working on an open source project like this, and before Sun it seemed that the pros outweighed the cons. Pros included things like helping fellow sysadmins – which I really enjoyed – and creating tools that I'd use in my own career. But there were various cons too, and Sun added more, to the point where it was hard to justify volunteering my spare time for this project.

Just as an example – and only because it's the easiest to summarize – at one point Sun legal became involved, and investigated whether Sun could claim ownership of the DTraceToolkit. As an employee, I was now in the crosshairs of my own company's lawyers. It wasn't fun, and they tried, very hard, for months. Thanks again to those that helped eventually put a stop to that!

I don't have hard feelings towards the lawyers. In fact, I admire how hard they were working for Sun – just like I was, so on some level it was understandable. But other events were much less understandable, and much worse. Again, there's no easy or short way to explain it all, and the problems weren't limited to the DTraceToolkit - DTrace itself came under friendly fire at Sun as well.

Instead of working on the DTraceToolkit I've been writing books, which have similar incentives: I get to help other people in the industry, especially sysadmins, as well as create references for myself, which also include tools. I never stopped contributing - I changed the medium to do so.

Learning From Mistakes

I need to:

  • Get the DTraceToolkit on github.
  • Organize it to support different OSes.
  • Update existing scripts and add missing scripts.
  • Split the DTraceToolkit into two: a small number of well-tested scripts for casual users, and a large number of less-tested scripts for performance engineers.

And by less-tested I do mean probably broken. This is embracing the reality of dynamic tracing, which exposes interfaces that are not stable. That doesn't make the scripts useless; rather, they need to be treated differently: they are examples of solving problems (even if they don't work), not ready-to-go tools.

The split can be either in the DTraceToolkit as subdirectories, or as separate toolkits. This addresses several of the mistakes listed above. I hope to follow up this post with another explaining how the split is done, which can refer to this post as background.

And the final lesson I should learn: I should stop writing books, to get my spare time back! (My colleagues wouldn't believe that for a second.) Ok, well, maybe I'll take a break from writing very long books, how about that? :-)


17 July, 2013

Systems Performance: Enterprise and the Cloud

I wrote another book, Systems Performance: Enterprise and the Cloud. It will be out this year, and I'm really looking forward to it helping people improve the performance of their systems. While the book has been drafted, I'm not sure how many pages the final version will be, as it still needs to go through the composition and layout stages of publication.

All my spare time has been consumed with it (and before it, the previous DTrace book), which is why this blog has been quiet for many years.

31 December, 2011

Still Writing

I haven't posted here in a few years, but I have been busy writing material, particularly for:
  • The DTrace book: which has over 1,000 pages, much of it new content. This ate over a year of my spare time (and Jim's).

  • dtrace.org/blogs/brendan: my professional blog, for posts related to my work (although it's still mostly a spare time project). This was formerly on blogs.oracle.com/brendan, and before that on blogs.sun.com/brendan.

For more of my recent writing, I've updated a summary under the Documentation section on my homepage, which includes posts from my dtrace.org blog and other places.

I'll post more here in the coming year: this blog is for purely personal posts and projects (like the DTraceToolkit).

24 June, 2008

DTrace in New York

Back in February I gave several DTrace talks in New York, including one at the New York OpenSolaris User Group meeting (NYCOSUG). I used an updated slide deck and was asked to put the PDF on my blog; I think Isaac must have beaten me to it and put it here (thanks!). I did intend to blog about this in case anyone was looking - sorry for the delay.

The NYCOSUG had a good turnout and asked some great questions, allowing me to deviate from the prepared slides and cover other things of interest (which is the value of an in-person presentation). After the presentation I realised there was one point I could have explained better, which would make an interesting blog post.

I started with the following simple demos - the point of these is to build on something commonly understood (such as the behaviour of fork() and exec()), to introduce something new - DTrace.

Tracing exec():
# dtrace -n 'syscall::exec*: { trace(execname); }'
dtrace: description 'syscall::exec*: ' matched 4 probes
CPU ID FUNCTION:NAME
0 98087 exece:entry bash
0 98088 exece:return ls
^C

In the above output, we traced an exece() system call - printing the current process name when we entered and returned from that system call. That process name changed from "bash" to "ls" (I executed "ls -l" in another window), which is what exec() does - replaces the current process image with another.

While unsurprising, the significance is that we are able to dynamically trace this kernel activity whenever we would like, along with thousands of other kernel events. I could, for example, trace the time taken for exec() to execute, or the exit status and error code when exec() returned; I could also trace the internal operation of exec() with enough detail to fill hundreds of pages (I just counted 47396 lines of output when tracing every kernel function entry and return during exec()).
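For instance, timing exec() from entry to return is another short one-liner (a sketch; it prints the elapsed time in microseconds):

 # dtrace -n 'syscall::exec*:entry { self->ts = timestamp; }
     syscall::exec*:return /self->ts/ {
         printf("%s: %d us\n", execname, (int)((timestamp - self->ts) / 1000));
         self->ts = 0;
     }'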

Now to trace fork():
# dtrace -n 'syscall::fork*: { trace(pid); }'
dtrace: description 'syscall::fork*: ' matched 6 probes
CPU ID FUNCTION:NAME
0 98227 forksys:entry 87417
0 98228 forksys:return 90769
0 98228 forksys:return 87417
^C

The above system call has one entry and two returns - which is what we expect from the fork() family.

Simple as this is, some interesting behaviour is already visible. Notice that the child returned before the parent? On the fork() entry, the parent's PID is traced (87417), however on return the child's PID (90769) appears first.

It's possible that the output could be shuffled due to how DTrace uses per-CPU buffers to minimise performance impact; to double check, add a "timestamp" column and post sort:

# dtrace -n 'syscall::fork*: { printf("%d %d", timestamp, pid); }' -o /tmp/out.dtrace
dtrace: description 'syscall::fork*: ' matched 6 probes
# sort -n +3 /tmp/out.dtrace
CPU ID FUNCTION:NAME
1 98227 forksys:entry 268361844462135 87417
1 98228 forksys:return 268361844960455 90968
0 98228 forksys:return 268361844965924 87417

I asked the audience - why does the child return from fork() before the parent? I added that this was a very difficult question!

Someone responded to say that this was how all operating systems worked - the parent process waits for the child to complete. I said I was just tracing fork() and the parent could be scheduled first - but deliberately isn't, and explained why. My answer left them confused - and it struck me afterwards that I should have explained this better.

Consider this:
# dtrace -n 'syscall::fork*:,syscall::wait*: { trace(pid); }'
dtrace: description 'syscall::fork*:,syscall::wait*: ' matched 10 probes
CPU ID FUNCTION:NAME
0 98227 forksys:entry 87417
0 98228 forksys:return 91088
0 98228 forksys:return 87417
0 98163 waitsys:entry 87417
0 98164 waitsys:return 87417
^C

In the above output we can see both fork() and wait(), and we can discuss behaviour such as the parent process waiting for the child to complete (since I was in a shell running foreground commands).

But I was actually asking a much deeper question, that of thread scheduling immediately after the fork() system call, and before the parent has called wait(). Immediately after fork() you have two threads - which should go on-CPU first? The parent, so that it can get to wait() sooner and before the child may have exited? Or is there a reason to schedule the child first?

As DTrace shows, the child is getting scheduled first, and the reason is one of performance. The source code explains why, in uts/common/disp/ts.c:

/*
 * Child is placed at back of dispatcher queue and parent gives
 * up processor so that the child runs first after the fork.
 * This allows the child immediately execing to break the multiple
 * use of copy on write pages with no disk home. The parent will
 * get to steal them back rather than uselessly copying them.
 */
static void
ts_forkret(kthread_t *t, kthread_t *ct)
The fork() system call creates a clone of the parent, but rather than copy all memory pages to a new address space (which would add significant latency during process creation), Solaris bumps a reference counter on those memory pages to remember that two processes refer to the same data. If one writes to a memory page later, this triggers a "copy on write" to create a private writable copy for that process. This means that expensive memory copies are only performed when needed - or if needed. Since a child process is likely to call exec(), it is likely to simply drop many existing memory references for the new process image, so copying those bytes would have been wasted cycles anyway.

However if the parent is scheduled first - before the child has had a chance to exec() - then the parent may continue writing to its address space, triggering copy on writes. Then the child executes, calls exec(), and drops those newly copied pages anyway - which were copied in vain. To avoid this, the child is scheduled first - to call exec() as soon as possible, as described in the comment above.

I learned about this behaviour when reading Solaris Internals 1st edition; but that was a time before DTrace and OpenSolaris. It's great that we all can now both read the code, and use DTrace to see it in operation.

18 February, 2008

DTraceToolkit in MacOS X

Apple included DTrace in MacOS X 10.5 (Leopard), released in October 2007. It's great to have DTrace available in MacOS X for its powerful application and kernel performance analysis. To think that there is now another kernel we can examine using DTrace is exciting - it's like discovering a new planet in the solar system.

Apart from kernel analysis, DTrace also improves general usability by answering everyday questions like: why are my disks rattling? Why does my browser keep hanging? The average user may not write DTrace scripts to answer these questions themselves (though it's better if they do), but will instead use prewritten scripts.

MacOS X includes a collection of DTrace scripts in /usr/bin, mostly from the DTraceToolkit:
leopard# grep -l DTrace /usr/bin/*
/usr/bin/bitesize.d
/usr/bin/cpuwalk.d
/usr/bin/creatbyproc.d
/usr/bin/dappprof
/usr/bin/dapptrace
/usr/bin/diskhits
/usr/bin/dispqlen.d
/usr/bin/dtruss
/usr/bin/errinfo
/usr/bin/execsnoop
/usr/bin/fddist
/usr/bin/filebyproc.d
/usr/bin/hotspot.d
/usr/bin/httpdstat.d
/usr/bin/iofile.d
/usr/bin/iofileb.d
/usr/bin/iopattern
/usr/bin/iopending
/usr/bin/iosnoop
/usr/bin/iotop
/usr/bin/kill.d
/usr/bin/lastwords
/usr/bin/loads.d
/usr/bin/newproc.d
/usr/bin/opensnoop
/usr/bin/pathopens.d
/usr/bin/pidpersec.d
/usr/bin/plockstat
/usr/bin/priclass.d
/usr/bin/pridist.d
/usr/bin/procsystime
/usr/bin/runocc.d
/usr/bin/rwbypid.d
/usr/bin/rwbytype.d
/usr/bin/rwsnoop
/usr/bin/sampleproc
/usr/bin/seeksize.d
/usr/bin/setuids.d
/usr/bin/sigdist.d
/usr/bin/syscallbypid.d
/usr/bin/syscallbyproc.d
/usr/bin/syscallbysysc.d
/usr/bin/topsyscall
/usr/bin/topsysproc
/usr/bin/weblatency.d

That's 44 DTraceToolkit scripts, plus plockstat from Solaris 10. While the DTraceToolkit now has over 200 scripts, it makes sense to pick out the most useful scripts for inclusion in /usr/bin.

Popular scripts such as iosnoop can now be run by MacOS X users:

leopard# iosnoop
UID PID D BLOCK SIZE COMM PATHNAME
501 130 R 31987472 40960 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 7879952 8192 Terminal ??/SearchManager.nib/keyedobjects.nib
501 130 R 32132304 12288 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32132528 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32047696 12288 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32132592 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32131512 12288 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32033296 12288 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32044488 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32045064 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32131344 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32048680 16384 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32132544 8192 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32049296 12288 Terminal ??/dyld/dyld_shared_cache_i386
-1 0 W 32482848 86016 kernel_task ??/vm/swapfile2
-1 0 W 32483040 135168 kernel_task ??/vm/swapfile2
501 130 R 32044672 4096 Terminal ??/dyld/dyld_shared_cache_i386
501 130 R 32132656 12288 Terminal ??/dyld/dyld_shared_cache_i386
[...]

The man pages are conveniently included in /usr/share/man.

I had been making preparations in the latest DTraceToolkit (0.99) for MacOS X DTrace, such as putting an "OS" field into the man pages and figuring out how to support different versions of the same script (tcpsnoop_snv, etc). Hopefully many scripts will run on both Solaris and MacOS X (especially if they use stable providers), though I expect there will be some that are specific to each. Now that QNX DTrace also exists, there is additional need for identifying OS specifics in the DTraceToolkit.

It's been great news for DTrace, Sun and Apple - who have not only gained the best performance and debugging tool available, but also the existing DTrace community.

17 February, 2008

Browsable DTraceToolkit

Stefan Parvu has created browsable HTML versions of the DTraceToolkit on the DTT test page. See DTraceToolkit ver 0.99 to browse that version.

A goal of the DTraceToolkit is to provide documented examples of DTrace scripting, in addition to what is available in the DTrace Guide. However these examples have been reaching a limited audience of those who download, unzip, and browse through the text files.

Now that the DTraceToolkit is browsable online, its contents can be found by internet search engines. This should help people not only find examples of DTrace usage, but also solutions to some common observability problems.

There have been a few other items of DTraceToolkit news which I'll blog about soon. Please excuse my infrequent blog postings - I've been busy since joining Sun on a particular project which consumes most of my spare time. It will be worth it, which should be clear once I can start posting about it on my Sun blog.