The Workshop On Performance and Reliability (WOPR) 17 was held Oct 20-22, 2011 on the theme of “Finding Bottlenecks”.  This was a historic event in the sense that no other peer workshop inspired by LAWST has convened this many times.  Of course, as a co-founder of WOPR, I’m (somewhat unreasonably) proud of this accomplishment.  But consider: over the last 9 years, so many folks have been so inspired by the community and value of WOPR that they have been willing to volunteer their time to plan and organize these events, their companies have been willing to donate meeting space (and often food & goodies), and participants have frequently been willing to pay their own way (sometimes taking vacation time) to attend.  That makes 17 events, one every 6 months since WOPR 1, a significant achievement – whether or not my “founder’s pride” is justified.  🙂
As is the tradition of WOPR, 20-25 folks, selected or invited by the “content owner” (a.k.a. the person or team who chose the theme to be explored this time), brought their personal experiences related to “Finding Bottlenecks” to share and explore with one another.  Also as is the tradition, certain patterns and commonalities emerged as these experiences were described and discussed. Everyone has their own take, there are no official findings, and I’m not even going to pretend that I can attribute all the contributing experiences and/or conversations to my takeaways below.
  • Finding bottlenecks can be technically challenging; examples include:
    •  Analyzing the test & the data is far from straightforward
    •  The “most useful” tools to narrow down the bottleneck may not be available – forcing us to be technically “creative” to work around those roadblocks.
  • Finding bottlenecks can be *very* socio-politically challenging; examples include:
    • Lack of trust (e.g. “That’s not a bottleneck, that’s the tool!”)
    • Denial (e.g. “It’s not possible that’s related to my code!”)
    • Lack of cross-team collaboration (e.g. “No, you can’t install that monitor on *our* system!”)
  • Sometimes human bottlenecks need to be resolved before technical bottlenecks can be found (e.g. the Perf Team being redirected, resources being re-allocated, excessive micromanagement, etc.).
Some other relevant and interesting topics came up (such as the frequent discrepancy between tester/technical goals & business goals), but since these weren’t “on theme” we didn’t discuss them deeply enough for me to draw any conclusions other than “the points and positions that did come up were consistent with what I would have anticipated if I’d thought about it in advance”, which, for me, is a nice confirmation.
My point in sharing these thoughts on finding bottlenecks is so that all the folks out there who feel like theirs is the only organization that is thwarted by socio-political challenges even more than technical ones can realize that they really aren’t alone.
The findings of WOPR17 are the result of the collective effort of the workshop participants: AJ Alhait, Scott Barber, Goranka Bjedov, Jeremy Brown, Dan Downing, Craig Fuget, Dawn Haynes, Doug Hoffman, Paul Holland, Pam Holt, Ed King, Richard Leeke, Yury Makedonov, Emily Maslyn, Greg McNelly, John Meza, Blaine Morgan, Mimi Niemiller, Eric Proegler, Raymond Rivest, Bob Sklar, Roland Stens, and Nishi Uppal.


Scott Barber
Chief Technologist, PerfTestPlus, Inc.

Co-Author, Performance Testing Guidance for Web Applications
Author, Web Load Testing for Dummies
Contributing Author, Beautiful Testing, and How To Reduce the Cost of Testing

“If you can see it in your mind…
     you will find it in your life.”