For people with investments in the stock markets, 2020 has been a rough year. On February 12, the three major US indices – the Dow Jones Industrial Average (DJIA), the NASDAQ Composite and the S&P 500 – hit high points. Globally, every major index showed a similar high point around that time. Starting February 20, things got interesting.
A combination of factors came together:
- An oil price war between Russia and OPEC drove prices for oil and gasoline down
- The growing awareness of the spread of the novel coronavirus played a part
- The normal forces of cyclical market “corrections”
While some analysts point to February 20 as the beginning of a crash, many people did not really become aware of what was happening until March 9. That day, the DJIA, NASDAQ and S&P 500 all plummeted. The futures market for the DJIA dropped 1,300 points before the opening, triggering an “automatic pause” for 15 minutes. When markets actually opened, they opened 500 points lower.
Global stock markets acted in unprecedented ways. There were huge swings up and down, apparently in reaction to news of the coronavirus, stimulus packages or other events. This continued from mid-March through the beginning of June.
What does this have to do with software? Everything.
On March 19, Marketplace, a daily news show covering business and economics, ran a story about a COVID-19 related change at the NYSE. The story included comments from Justin Schack of Rosenblatt Securities and from Stacey Cunningham, the president of the NYSE.
In retrospect, the story seems a cautionary tale. It reported that the NYSE trading floors would be closed starting the following Monday, March 23. The majority of trades would be executed by software alone; no human floor traders would participate. This raised concerns for some because, in the past, human floor traders had noticed software problems and intervened.
In 2012, a software change at Knight Capital introduced a significant problem which resulted in a stock trading disruption. According to Schack, it was human floor traders who recognized the price swings as well out of line for the stocks involved. The humans saw the errors and stepped in. Other electronic trading systems reacted as they were designed to: they took the transactions as legitimate and acted on them with their own trade orders.
The humans acted together and “shut down” some of the stocks that had opened at “crazy prices”.
Other stock exchanges have been running electronically for some time. The NASDAQ, for example, runs without a conventional “floor.” Still, the NYSE going to an all-electronic format proved interesting.
From March 23 through May 26, when the trading floors reopened, most of the NYSE transactions were handled electronically. Stocks still traded. Major indices such as the Dow Jones Industrial Average still did strange things. Media reported on the strange things and how “the markets” were responding to world events.
Human Interaction
During that period of all-electronic trading, no humans were watching other humans wrestle with what was happening “in the market” at the moment. There were no facial expressions to read, because the humans who normally trade on the floors were not there. The usual discussions and negotiations at the end of the trading day did not happen.
With the bulk of the transactions relying on software, conversations between floor traders did not happen. Sharing information and insights with clients became less certain than it had been before.
On May 22, Stacey Cunningham returned to Marketplace. She talked about the economy in general and how the “economy is not the stock market.” She also talked about reopening the floor of the NYSE the following Tuesday, May 26.
She said one thing about trading stocks that struck me as important. She said it about reopening the trading floors rather than staying with the all-electronic trading model that had been in use since late March. She said, “What we are missing… is the value of human judgement.”
The technology behind electronic trading is amazing. However, all software will respond only as it has been designed and written to respond. Even the more advanced AI products need to adjust, adapt and “learn.” Unexpected shocks, such as an oil price war combined with a global pandemic impacting economic systems in every country, might not be at the top of the contingency planners’ lists when designing the software.
Knight Capital is likely the best-known example in stock trading of this phenomenon. We see similar issues with autonomous vehicles, facial recognition systems and other artificial intelligence applications.
Human Judgement?
The number of instances of “thinking,” “intelligent” or “learning” software grows every year. As these applications grow in complexity, they present new problems. As they grow in reliability, they reveal other problems.
“Crash avoidance” systems are becoming common for most makes, if not models, of cars. These systems detect obstacles or conditions which may lead to a crash and engage the brake system if the operator does not. They detect a potential problem and slow the vehicle down; hopefully, the driver responds and takes over to limit or avoid a crash.
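As a rough illustration of the point that software responds only as it was designed to, here is a hypothetical sketch of the kind of decision logic such a system might run. The function name, threshold and inputs are my own invention for illustration, not taken from any real vehicle system.

```python
# Hypothetical sketch of a crash-avoidance decision loop (illustrative only).
BRAKING_DISTANCE_M = 25.0  # assumed threshold, not from any real system


def on_sensor_update(obstacle_distance_m: float, driver_is_braking: bool) -> str:
    """Decide what to do for one sensor reading."""
    # The system acts only on the condition its designers anticipated:
    # an obstacle closer than the threshold while the driver is not braking.
    if obstacle_distance_m < BRAKING_DISTANCE_M and not driver_is_braking:
        return "engage_brakes"
    # Anything the designers did not anticipate falls through to "do nothing".
    return "no_action"


print(on_sensor_update(18.0, driver_is_braking=False))  # engage_brakes
print(on_sensor_update(40.0, driver_is_braking=False))  # no_action
```

The sketch is deliberately simple: whatever situation the designers did not foresee ends up in the “do nothing” branch, which is exactly the gap human attention is supposed to cover.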
If the driver presumes the system will avoid accidents, not merely limit them, are they likely to focus on their driving in the same way they would without it?
Similarly, do the “drivers” running cars in automated, autonomous or “auto-pilot” mode fully understand the potential for problems? Do they understand the need to stay engaged and to focus on road and traffic conditions as if they were driving manually?
People get distracted by something. Their focus drifts from what it is supposed to be on, and a crash occurs. It is possible that everyone walks away unharmed, but for some, there are injuries.
Human Judgement
Can we really count on people to stay focused on results from automated testing if we can’t rely on them to remain focused on driving a car? If a lack of focus can end in injury or death for the driver, their passengers or other people, can we expect people to remain focused when using tools to make and test software?
Most of us like to think we can remain focused throughout the process of testing or reviewing test results. However, we get tired, sometimes fatigued. We get distracted by people asking questions, by emails or by meetings.
Even when we are scanning logs or other summary documents, are we paying as close attention as we might be? Do we see everything on the first pass? Do we even consider taking a second pass?
The problem is, each of us wants to believe we are completely thorough and professional and focused all the time. We aren’t. We miss things. We will miss things in the future.
Are we aware we have missed things? Can these be tracked and mitigated? Can we do better?
If we are not aware of these areas of our own limitations, we simply cannot improve. We cannot solve a problem if we are not aware of the problem. This is especially true when the problem is within us.
Distraction and Selection
Without being aware of our own vulnerabilities and limitations, can we be sure the tools we build are free from our own behavior patterns? How can we even become aware that those patterns exist?
When it comes to testing software or developing test automation, much of the time these complex processes are treated as simple, straightforward exercises. They are regarded as something that is obvious.
The only obvious thing is that, most of the time, we don’t know what the full range of possibilities is. Our view of them is limited by our perception, our biases and, simply, our lack of focus. We cannot focus on the immediate task because other things are drawing us away.
The idea of a selection algorithm is that we can identify the kth smallest element in an array, where k can be any position from the first up to the total number of elements. I see the Selection “Problem” in testing as being about human judgement and the ability to concentrate effort on a task.
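For readers unfamiliar with the algorithm behind the metaphor, here is a minimal sketch of one common selection algorithm (quickselect) in Python. The function name kth_smallest and the sample data are illustrative only, not part of any system discussed above.

```python
import random


def kth_smallest(values, k):
    """Return the kth smallest element of `values` (1 <= k <= len(values))."""
    if not 1 <= k <= len(values):
        raise ValueError("k must be between 1 and the number of values")
    # Pick a pivot and partition the values around it.
    pivot = random.choice(values)
    below = [v for v in values if v < pivot]
    equal = [v for v in values if v == pivot]
    above = [v for v in values if v > pivot]
    # Recurse into whichever partition holds the kth smallest element.
    if k <= len(below):
        return kth_smallest(below, k)
    if k <= len(below) + len(equal):
        return pivot
    return kth_smallest(above, k - len(below) - len(equal))


print(kth_smallest([7, 2, 9, 4, 1], 3))  # prints 4, the 3rd smallest value
```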
It manifests itself in a couple of ways. The first is that humans will continually find new ways to confound prediction, simply by acting like humans. We cannot identify all the possible actions they might take; therefore, predicting them is impossible. The best we can do, then, is to identify the most likely actions and work with those.
The second is that humans, no matter how hard they try, will continually be distracted from the task at hand. They are challenged to focus on driving a car by the very things added to the vehicle for the comfort of the driver and the people in the passenger compartment. What was a “radio” not that long ago is now an “entertainment system.” At least one of the fatal Tesla accidents involving a car in Autopilot mode reportedly involved a driver playing a video game on his phone.
The Knight Capital problem was the result of an error in deployment. Reportedly, the new code was not loaded to one of the servers. We do not know the precise sequence of events. However, it serves as a reminder that even if the software itself is tested exhaustively, other errors can and will occur.
Software
We are becoming more and more reliant on software. We count on it to make good decisions and relieve humans of mundane tasks. This is an excellent idea, as long as things work and the humans involved remain attentive.
Still, most people engaged in testing software have a tendency to look for confirmation of behavior. They look to see that it does what the requirements say it should and what the design says it is intended to do. As a result, they often limit themselves and are not aware of what else may happen in the application or the broader system.
The Selection Problem is not caused by a problem in the software. It is caused by our lack of awareness and our limited ability to focus on the task and on the implications of what we are making. The software will reflect the beliefs, biases and understanding of the people who make it. It is a reflection of those things even if we wish to deny we have any bias.
As testers, we need to be aware of these things, perhaps more than others who are involved in making software. We must be engaged in what we are doing and in everything around us, even if it is not specified beforehand. We must be aware, like a person driving a car, of the changing and developing situation as we move down the road.
We then have a chance to find problems before catastrophe strikes. It can be difficult, but not impossible.