How Can We Effectively Emulate Load on Web 2.0 - RIA Applications?
I started this blog post yesterday while en route to Atlanta, GA from the BFusion-BFlex conference in Bloomington, IN, and noted where I was as I wrote...
I am currently sitting in Indianapolis airport after a really great conference, or rather two conferences: BFusion and BFlex in Bloomington, Indiana. Bloomington is a beautiful town, and Indiana University has a very impressive campus and an even nicer set of people, headed up by Bob Flynn. As Dan Wilson said as we headed back to the airport, this was a top-class community event, and the fact that it was free is marvelous for all of us.
So I wanted to get to the main point of my post: I wonder whether, with true Web 2.0 and RIAs, we are moving into a distributed and wild client-server paradigm. If so, how will we be able to effectively load test such applications before they are launched? My point of view is that no application should be launched without effective load testing. For the past 8-9 years I have been involved in load testing web applications, ideally before they were launched, but often after they were launched and were experiencing performance problems, stability problems, or both.
Load testing web applications mostly involves applying load to the server tier (web, database, LDAP, etc.) by creating scripts, often by recording browser sessions and then randomizing those sessions with external data imported from files such as .txt or .csv files. In that paradigm, having, let's say, 1-5 client machines running the load tests was adequate. In my opinion, things are changing fairly rapidly; here are some reasons. Before going into detail I just wanted to relocate myself to where I now am: climbing through 20,000 feet en route to Atlanta, GA, listening to "The Wall" by Pink Floyd. Since aircraft were invented they have typically had between one and four engines, and hopefully those are tested somehow before they fly. I wonder how the aerospace sector would react if, almost overnight, aircraft could have a thousand engines or more? Please read on if you wish.
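To make the traditional approach concrete, here is a minimal sketch in Python of the data-driven scripting described above: a recorded request is reduced to a URL template, parameterized with rows from an external .csv file, and replayed from a small pool of worker threads standing in for those 1-5 client machines. The URL, field names, and data are hypothetical, and a real tool would add think times, assertions, and response-time measurement.

```python
import csv
import io
import concurrent.futures

# Hypothetical "recorded" request: a URL template whose query values
# are filled in per virtual user from an external data file.
TEMPLATE = "http://example.com/search?q={term}&page={page}"

def build_requests(csv_text):
    """Substitute each CSV row into the recorded URL template."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [TEMPLATE.format(**row) for row in rows]

def run_virtual_users(urls, send, workers=5):
    """Replay the parameterized requests from a small worker pool."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(send, urls))
```

In a real run, `send` would be something like `urllib.request.urlopen` wrapped with timing code; for a dry run it can be any callable that takes a URL.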
In typical web applications there are two TCP/IP ports in use: port 80 for standard browser-to-server traffic and port 443 for SSL communications. These ports sit at the web server end and are largely, if not wholly, controlled by the web server. Another important point to consider: recycling threads quickly and efficiently is key to the performance of web applications. In all of my hundreds of on-site and remote projects, threads that do not release properly are almost always very bad news. In many Web 2.0 applications, however, threads need to be kept open, often on ports other than 80 or 443. In client-server applications this constant-connection requirement was not such a problem, because client-server applications would typically run over controlled and known local area networks (LANs) or wide area networks (WANs). The only company I know of that has somewhat successfully enabled client-server operation across the Internet is Citrix, and even that can, and often does, impose a performance penalty.
So, back to Web 2.0 and RIAs: I feel we have to change the way we think about and prepare for web applications before we get hit so hard that it is difficult to recover. Michael Labriola and I were discussing this overall subject at BFusion-BFlex. The reality is that, as a community, we really do not know what the ramifications are, but we absolutely need a way to predict them; we need to be able to create load tests that can simulate what we are about to receive.
The track I just listened to from "The Wall" was "Comfortably Numb", and that, in my opinion, is largely the state we are in. I would really welcome feedback and opinions on all of this; in the meantime I will continue to push forward with efforts to find ways to effectively create tests that predict what could happen. There will be more on this subject soon.