It started as a joke. Having recently watched the *Big Bang Theory* episode "The Herb Garden Germination" and reviewed one too many resumes listing the same buzzwords over and over, I had an idea: create some term or concept, inject it into the wild with enough backing information to make it sound legitimate, and see if it ever made it back to me in a resume and, if so, how long that took.
I needed something in an area that was in use where I worked, and in enough other environments to be plausible, yet wasn't widely popular. The perfect candidate seemed to be model-based testing. So, I came up with the concept of Cross-Matrix Defect Analysis – multiplying a state transition matrix by a matrix of known defects to get a sort of weighted state transition matrix. I worked up a few formulas, wrote them on a whiteboard in a prime location at work, and recruited colleagues to help me plant the seeds so that when someone asked, "What's that?" they could respond, "Oh, that's something that Michael is working on for our model-based testing called Cross-Matrix Defect Analysis."
But the more I thought about it, the more I realized that there was actually something to this Cross-Matrix Defect Analysis, something beneficial to our model-based testing framework. We could rework the idea a little: substitute a state adjacency matrix for the state transition matrix, do a little matrix multiplication and row normalization, and come up with a fairly quick and simple way to generate a state transition matrix based on some measured quantity, such as defect populations.
The typical adjacency matrix, represented here as \(A\), is an \(n \times n\) matrix (where \(n\) is the number of states in the model) in which the entry \(a_{ij} = 1\) if state \(i\) is adjacent to state \(j\), and \(0\) otherwise.
If we let \(B\) be the \(n \times n\) matrix representing some measured quantity with respect to the application, such as the number of known defects, where the entry \(b_{ii}\) represents the frequency of the measured items present in state \(i\), then we have a diagonal matrix (nonzero entries appear only on the diagonal).
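To make this concrete, here is a minimal sketch in Python using numpy, assuming a toy three-state model; the state names and defect counts are invented for illustration:

```python
import numpy as np

# Toy three-state model: 0 = Login, 1 = Home, 2 = Settings (hypothetical).
# a_ij = 1 if state i is adjacent to state j, and 0 otherwise.
A = np.array([[0, 1, 0],    # Login -> Home
              [1, 0, 1],    # Home -> Login, Home -> Settings
              [0, 1, 0]])   # Settings -> Home

# Diagonal matrix of measured quantities, e.g. known defect counts per state.
B = np.diag([2, 5, 1])
```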
If we then multiply the two matrices \(A\) and \(B\), we get another matrix, \(C\), which is an adjacency matrix that has been weighted with respect to the measured quantities:

$$C = [A][B]$$
If we then compute the matrix \(C'\) by performing row normalization on the matrix \(C\), letting \(c'_{ij} = \frac{c_{ij}}{\sum_{k=1}^{n}c_{ik}}\), then \(C'\) will be a stochastic matrix where \(c'_{ij}\) can be interpreted as the probability of transitioning from state \(i\) to state \(j\), weighted by the frequency count associated with state \(j\).
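Continuing the sketch above, the multiplication and row normalization are a couple of lines of numpy (assuming, for the moment, that every row sum is nonzero):

```python
# Weighted adjacency matrix: c_ij = a_ij * b_jj.
C = A @ B

# Row-normalize so each row sums to 1, yielding the stochastic matrix C'.
C_prime = C / C.sum(axis=1, keepdims=True)
# With the toy values above:
# [[0.    1.    0.   ]
#  [0.667 0.    0.333]
#  [0.    1.    0.   ]]
```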
However, this method of generating a state transition matrix can result in unreachable states when the measured quantity for one or more states is zero. For example, if we are weighting by defect populations and no defects have been identified for the login screen, then the probability of transitioning to the state representing the login screen would be zero, meaning the login screen would never be reached. In many cases this issue can be overcome by enforcing the constraint that each state in the model be reachable, requiring the frequency count for each state to be greater than or equal to one. This can be accomplished by incrementing each \(b_{ii}\) entry by one, that is, by adding the identity matrix, \(I\):
$$C = [A][B + I]$$
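In code, the adjustment is one line. Continuing the toy example, adding the identity guarantees that every state with an incoming edge keeps a nonzero transition probability (and, as a side effect, avoids division by zero when normalizing):

```python
# Bump every state's count by one via the identity matrix.
C = A @ (B + np.eye(3, dtype=int))
C_prime = C / C.sum(axis=1, keepdims=True)
# The row for Home is now [0.6, 0.0, 0.4] instead of [0.667, 0.0, 0.333].
```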
This also causes an issue, because we have compromised the accuracy of what we are using to weight our state transition probabilities. But is the resulting solution good enough for the problem we set out to solve? Incrementing the counts does perturb the values used to generate the transition probabilities (the Observer Effect), but does that really matter? The degree to which the counts are affected can often be considered negligible or minor. For example, when referring to defect populations, are we counting all defects, or are we counting known defects with the understanding that there may be one or more undiscovered defects? If it's the latter, then incrementing our count by one could be OK. In other cases, what we are measuring may be a highly subjective estimate, such as expected traffic through a particular function of the application, or perceived risk.
The point is that we often apply heuristics to help us establish probabilities (not certainties) of execution flow through a system. Generating weighted state transition probabilities this way is simply another application of a heuristic: it yields an approximate solution that may be considered good enough in some contexts, if we are willing to trade optimality, completeness, and accuracy for an answer we can calculate quickly.
If we are willing to accept these trade-offs, then we can generalize the result and let the matrix \(B\) denote any \(n \times n\) diagonal matrix that represents any known, estimated, or heuristic measure (a generalized sketch follows the list), such as:
- Defect populations
- Defect injection rates
- Business criticality
- Function points (complexity)
- Application use (traffic)
- Application change
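Pulling the steps together, a minimal generalized sketch might look like the following; the function name and the traffic figures are invented for illustration:

```python
import numpy as np

def weighted_transition_matrix(A, weights):
    """Build a stochastic transition matrix from an adjacency matrix A
    and a per-state measure (defect counts, traffic, risk, etc.)."""
    # B + I: diagonal measure matrix with each count bumped by one,
    # so that no state's incoming transitions collapse to zero.
    B = np.diag(np.asarray(weights) + 1)
    C = A @ B
    # Row-normalize to obtain the stochastic matrix C'.
    return C / C.sum(axis=1, keepdims=True)

# Example: weighting by hypothetical application traffic instead of defects.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
traffic = [120, 900, 45]   # made-up page-visit counts per state
P = weighted_transition_matrix(A, traffic)
```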

The application of this method has many benefits, chief among them producing a good enough solution quickly enough to solve the problem. Furthermore, using model-based testing and any quantifiable measure that can be applied to each state of the model, we can generate a transition probability matrix which may then be used to automatically generate test cases that are statistically directed toward the areas of the application affected by that measure. It also means that we no longer need to guess at the state transition probabilities, which in turn means the reliance on domain knowledge to generate them can be removed, thereby eliminating a bottleneck and a potential point of failure.
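As one illustration of that last point, here is a minimal sketch of statistically directed test-case generation: a simple random walk over the weighted transitions. The stochastic matrix and state names are the toy values from earlier, not a prescribed API:

```python
import numpy as np

def generate_test_path(P, states, start=0, length=10, rng=None):
    """Sample a test path by walking a stochastic transition matrix."""
    rng = rng or np.random.default_rng()
    path = [states[start]]
    current = start
    for _ in range(length - 1):
        # Pick the next state according to this row's transition probabilities.
        current = rng.choice(len(states), p=P[current])
        path.append(states[current])
    return path

# Identity-adjusted C' from the toy example above.
P = np.array([[0.0, 1.0, 0.0],
              [0.6, 0.0, 0.4],
              [0.0, 1.0, 0.0]])
states = ["Login", "Home", "Settings"]
print(generate_test_path(P, states, length=6))  # e.g. ['Login', 'Home', ...]
```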

By the way, I’m still waiting for Cross-Matrix Defect Analysis to show up in a resume.