I want to give a shout out to the folks at the Sheraton who have been fantastic dealing with logistics, food, drink breaks, etc. Seriously, they have been wonderful. Also, I want to thank everyone who has hung around for the amazing conversations that go on each night afterwards. The plus side: I learn so many interesting things from various perspectives. The down side: I am going to bed way too late this week ;).
We’re about to get ready for the second keynote, so I’ll be back shortly.
Christine is describing the personalities of people based on single words: the who people, the what people, the why people, the how people. Each of these people has their own language and their own way of dealing with things, so if you want to get what you want from them, you have to know what they need to hear and how to speak to them in their terms.
One of the things I will already say about Christine is that she directly engages with individuals in the audience. I had to laugh when she did the partner exercise with the fist; my immediate reaction was “oh, this is so Wood Badge!” However, it was fun to see the various reactions of the people in the audience.
One of the great tools that I use often, and that I’ve found greatly helpful, comes from a phrase James Bach said in an interview a couple of years back… “I have strong convictions, but they are lightly held.” What does he mean by that? It means that he genuinely believes or has come to certain conclusions, and he will battle and fight for them, but if new information comes to light, he can modify his understanding and see things differently. That’s an extremely valuable tool.
With humor, a bit of silliness, and a lot of heart, this was honestly a way better talk than I was expecting. By the way, for those who want a taste of what Christine is like, check this out:
—–
I have wanted to participate in Henrik Andersson’s talk “Now, what’s Your Plan?” several times, but I have either been speaking or otherwise engaged each time he’s given it. When I saw he was doing it as a two-hour block, I knew I had to get in on it. Thus, much of today will be focused on Henrik’s presentation (and thanks to Leonidas Hepas for helping facilitate the session). I think this will be fun :).
First thing we all did was write down a working definition of “context”. Simple, right?
Hmmm… maybe not ;). Context is deceptively easy to define, but it’s not as easy to come to an agreement on what it actually means. Henrik, of course, was not content to just have people consider a definition; we needed to internalize and understand it. When he pulled out the spider robots, I smiled and laughed… and then told my group that I would need to recuse myself from the exercise, since it is the content of the “What is Context?” module being used in the SummerQAmp curriculum. Still, it’s cool to see how the groups are addressing the various challenges.
Without spoiling the exercise (some of you may want to do it later if Henrik offers this again, and I recommend you go through it if you get the chance), it’s interesting to see how many different scenarios and contexts can be created for what is essentially the same device.
As each team has gone through each round, changes in the requirements and the mission are introduced. Each change requires a rethinking and a re-evaluation of what is needed and what is appropriate. This is where “context” begins to be internalized, along with the ability to pivot and change our testing approach based on new information. It’s stressful, it’s maddening, and it really shows that context is not only a consideration across different projects; there can also be different contexts within the project you are actually working on, and the ability to change one’s mind, ideas and goals mid-stream is a valuable skill to have.
What was interesting was to come back and see, based on this experience, whether or not the teams’ ideas of context had changed. We can look at context in terms of the way we test, in terms of the use of the product, or based on the people who will use it. Several of the teams came back to their initial definitions and decided to modify them. I could be a smart aleck right now and say that this is the moment that everyone comes out and says “It depends” ;).
So… what did our instructors/facilitators use to define context? See for yourself:
——
Lunch was good, and we are now into our afternoon keynote. Matt Johnston from uTest is talking about “the New Lifecycle for Successful Mobile Apps”. We talk a lot about tools, processes and other details about work and what we do. Matt started the talk by discussing companies vs. users. Back in the day, companies provided product to users. Today, because of the open and wide availability of applications in the app store, users drive the conversation more than ever. A key thing to realize is that testing is a means to an end. It’s there to “provide information to our stakeholders so that they can make good decisions” (drink!).
Mobile is just getting started. We are seeing a transition away from desktops and laptops to mobile (in all its forms: tablets, phones, etc.). Mobile is poised to eclipse the number of desktop and laptop machines in the next three to five years. Mobile apps are also judged much more harshly than their desktop or web equivalents were at the same point in the product lifecycle. The court of public opinion is what really matters. App store ratings and social media will make or break an app, and they will do so in record time today.
Much of the testing approach we have used over the years has come from an outside-in perspective. Mobile is requiring that our testing priorities invert, and that we focus on an inside-out approach. What the user sees and feels trumps what the product actually does, fair or not.
The tools available to mobile developers and mobile testers are expanding, and the former paucity of tools is being addressed. More and more opportunities are available to check and automate mobile apps. Analytics is growing to show us what the users of mobile devices are actually doing, and to see how and where they are voting with their feet (or their finger swipes, in this case 😉 ).
A case study presented was USA Today, a company that supports a printed paper, a website and 14 native mobile apps. While it’s a very interesting model and a great benefit to its users, it’s a serious challenge to test. They can honestly say that they have more uniques and more pageviews on mobile than on the web. That means that their mobile testing strategy really matters, and they have to test not just domestically, but worldwide. The ability to adapt their initiatives and efforts is critical. Even with this, they are a company that has regularly earned a 4.5-star app store rating for all of their apps.
If your head is spinning from some of that, you are not alone. Mobile isn’t just a nice-to-have for many companies; it’s now an essential component of their primary revenue streams.
—–
One of the unfortunate things that can happen with conferences is when a presenter has to drop out at the last minute. It happened to me for PNSQC 2011 because of my broken leg, and it happened to one of the presenters scheduled today. In his place, Mark Tomlinson stepped in to discuss performance measurements and metrics. The first thing he demonstrated was the fact that we can measure a lot of stuff, and we can chew through a lot of data, but understanding what that data actually represents, and where it fits in with other values, is the real art form and the place where we really want to focus our efforts.
Part of the challenge we face when we measure performance is “what do we actually think we are measuring?” When a CPU is “pegged”, i.e. showing 100% utilization, can we say for sure what that represents? In previous decades, we were more sure about what that 100% meant. Today, we’re not so sure. Part of the challenge is to get clear on the question “What is a processor?” We don’t really deal with a single CPU any longer; we have multiple cores, and each core can host child virtualization instantiations. Where does one CPU reality end and where does another begin? See, not so easy, but not impossible to get a handle on.
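To make the ambiguity concrete, here’s a quick sketch of my own (not something from Mark’s talk) showing why “the CPU is pegged” means less than it used to. It assumes you have the third-party psutil library installed (pip install psutil):

```python
# A minimal sketch: the aggregate CPU number can hide the fact that one
# core is saturated while the others sit idle.
import psutil

# Sample each logical core over one second, then compute the aggregate.
per_core = psutil.cpu_percent(interval=1, percpu=True)
aggregate = sum(per_core) / len(per_core)

print(f"{psutil.cpu_count(logical=True)} logical CPUs")
for i, pct in enumerate(per_core):
    print(f"  core {i}: {pct:5.1f}%")
# One core pegged at 100% on an 8-core box shows up as ~12.5% here.
print(f"aggregate: {aggregate:5.1f}%")
```

Run that while a single-threaded process spins, and the aggregate figure looks perfectly healthy even though one core is completely saturated.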
Disk space is another beloved source of performance metric data. Parking the data that you need in the place you need it, in the optimal alignment, is a big deal for certain apps. The speed of access and the feel of the system response when presenting data can be heavily influenced by how the bits are placed in the parking lot. Breaking up the data to find a spot can be tremendously expensive (this is why defragmenting drives regularly can provide such a tremendous performance boost). Different types of servers handle I/O in different ways (app, DB, caching, etc.).
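In the same illustrative spirit (again my own sketch, using psutil), the kernel’s cumulative disk I/O counters can be sampled twice and turned into rates, which is where poor bit placement starts to show up:

```python
# A minimal sketch: sample cumulative disk I/O counters over a window
# and report the bytes and operations that occurred in between.
import time
import psutil

before = psutil.disk_io_counters()
time.sleep(5)  # measurement window
after = psutil.disk_io_counters()

read_mb = (after.read_bytes - before.read_bytes) / 1_048_576
write_mb = (after.write_bytes - before.write_bytes) / 1_048_576
print(f"reads:  {read_mb:7.2f} MB over 5s "
      f"({after.read_count - before.read_count} ops)")
print(f"writes: {write_mb:7.2f} MB over 5s "
      f"({after.write_count - before.write_count} ops)")
```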
RAM (memory) is another much-treasured and frequently coveted performance metric. Sometimes it gets little thought, but run out of it and it can really mess up your performance. Like disk, if you reach 100% on RAM, that’s it (well, there’s the page file, but really, you don’t want to consider that any real benefit; relying on it is called a swapping condition, and yeah, it sucks).
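psutil makes the “you really don’t want to be swapping” point easy to check, too. Another sketch of mine (the sin/sout counters are cumulative since boot and aren’t populated on every platform, so treat this as a starting point):

```python
# A minimal sketch: report RAM and swap usage; climbing sin/sout values
# mean the system has been paging, i.e. the dreaded swapping condition.
import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()
print(f"RAM:  {vm.percent}% used of {vm.total / 2**30:.1f} GiB")
print(f"swap: {sw.percent}% used, {sw.sin} bytes swapped in, "
      f"{sw.sout} bytes swapped out since boot")
if sw.sin or sw.sout:
    print("warning: the system has been paging; RAM numbers alone "
          "won't explain your response times")
```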
The area where I remember doing the most significant metric gathering would be the network sphere. Networking is probably the most variable performance aspect, because now we’re not just dealing with items inside of one machine. What another machine on the network does can greatly affect my own machine’s network performance. Being able to monitor and keep track of what is happening on the network, including retransmission, loss, throttling, etc., can be very important.
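The same sample-twice trick works for the network. One caveat in this sketch of mine: psutil’s counters cover bytes, packets, errors and drops, but not retransmissions (on Linux you’d dig those out of /proc/net/snmp), so this is a starting point rather than the whole story:

```python
# A minimal sketch: turn cumulative NIC counters into per-window rates.
import time
import psutil

before = psutil.net_io_counters()
time.sleep(5)  # measurement window
after = psutil.net_io_counters()

print(f"sent: {(after.bytes_sent - before.bytes_sent) / 1024:.1f} KiB/5s")
print(f"recv: {(after.bytes_recv - before.bytes_recv) / 1024:.1f} KiB/5s")
print(f"drops in/out: {after.dropin - before.dropin} / "
      f"{after.dropout - before.dropout}")
```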
Some new metrics we are getting more interested in (I’ve put a small sampling sketch after the list):
- Battery Power (for mobile)
- Watts/hr (efficiency of power consumption in a data center, i.e. “green power”)
- Cooling in a data center
- Cloud metrics (spun-up compute unit costs per hour)
- Cloud storage bytes (Dropbox, Cloud Drive, etc.)
- Time (end user response time, service response time, transaction response time)
- Usage/Load (number of connections, number of active threads, number of users, etc.)
- Multi-threading (number of threads, maximum threads, thread state, roles, time to get threads)
- Queuing (logic, number of requests, processing time)
- Asynchronous Transfer (disparate start/end, total events, latency)
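Here’s the promised sketch: a toy sampler of my own (not a tool from the talk) that logs a timestamped row of a few of these metrics to a CSV. Note that psutil.sensors_battery() returns None on machines without a battery:

```python
# A minimal sketch: sample CPU, RAM and battery on a fixed cadence and
# write timestamped rows, the raw material for correlative graphing.
import csv
import time
import psutil

with open("metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "ram_pct", "battery_pct"])
    for _ in range(12):  # one minute at a 5-second cadence
        battery = psutil.sensors_battery()  # None on a desktop/server
        writer.writerow([
            time.time(),
            psutil.cpu_percent(interval=5),  # also acts as the delay
            psutil.virtual_memory().percent,
            battery.percent if battery else "",
        ])
```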
Correlative graphing is also used to help us see what is going on across two or more measurements. A knee in the curve may be interesting, but wouldn’t it be more interesting to see what other values might be contributing to it?
This fits into the first talk that Mark gave yesterday, and here’s where the value of that first talk becomes very apparent. Much of the data we collect, if we just look at the values by themselves, doesn’t really tell us much. Combining values and measuring them together gives us a clearer story of what is happening. Numbers are cool, but again, testers need to provide information that can drive decisions (drink!). Putting our graphs together in a meaningful way will greatly help with that process.
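To close the loop on correlative graphing, here’s one last sketch of my own, overlaying two of the sampled metrics on a shared time axis with matplotlib (reading the CSV produced by the sampler above). A knee in one curve lined up against the other is where the interesting questions start:

```python
# A minimal sketch: plot CPU and RAM from the sampler's CSV on a shared
# time axis, with a second y-axis so the scales don't fight each other.
import csv

import matplotlib.pyplot as plt

times, cpu, ram = [], [], []
with open("metrics.csv") as f:
    for row in csv.DictReader(f):
        times.append(float(row["timestamp"]))
        cpu.append(float(row["cpu_pct"]))
        ram.append(float(row["ram_pct"]))

fig, ax1 = plt.subplots()
ax1.plot(times, cpu, color="tab:blue")
ax1.set_ylabel("CPU %", color="tab:blue")
ax1.set_xlabel("time (epoch seconds)")
ax2 = ax1.twinx()  # second y-axis sharing the same x (time)
ax2.plot(times, ram, color="tab:red")
ax2.set_ylabel("RAM %", color="tab:red")
plt.title("CPU vs RAM over the same window")
plt.show()
```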
—–
What does it mean to push, or “jam” a story upstream? Jerry Welch is introducing an idea that, frankly, I never knew had a word before. His talk is dedicated to “anadromous testing”, or more colloquially “upstream testing”. How can we migrate testing upstream? We talk about the idea that “we should get into testing earlier”. Nice to say, but how do you actually do that?!
Sure. Have you heard of the STLC, the Software Testing Life Cycle? The idea is that just as there is a software development lifecycle, there is also a test lifecycle that works similarly to, and in many ways synchronously with, the SDLC.