Think like a user, but break software like an expert!

Testing From The Trenches...
From Someone Who Has Been There and Done That!

Wednesday, July 7, 2010

Google, YouTube and My Favorite Security Test: XSS

This past weekend it turns out that YouTube was hacked using an XSS attack in the comments section of YouTube pages.

What, you may ask, is an XSS attack?  XSS (cross-site scripting) is an attack in which JavaScript and HTML are injected into a page, for example through script tags in a comment field, so that malicious code executes in other users' browsers.  In the case of Google's YouTube, the malicious code redirected fans of Justin Bieber to a pornography site!  While it's not nice to send young kids to a pornography site, it could have been much worse for Google had the hackers wanted to do something more serious, like attacking the server or harvesting accounts and passwords...well, you get the idea.

Google is very big on automating regression testing of new builds, and those automated tests, along with unit tests, were probably executed on this build of YouTube.  The problem is that the automated tests must not have covered XSS attacks on comment fields or other input fields.  Or, if they did test for scripts, they probably only verified that the first script tag was escaped and never checked what came after it.  As it turned out, the hackers inserted their malicious code after the first script tag: the first tag was escaped, but the second was not.
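To make the bug concrete, here is a minimal sketch of a filter that only escapes the first script tag, next to one that escapes everything.  Both functions and the payload are hypothetical, written just to illustrate the first-tag-only mistake described above, not YouTube's actual code:

```python
def naive_sanitize(comment: str) -> str:
    """Hypothetical buggy filter: escapes only the FIRST script tag it sees."""
    return comment.replace("<script", "&lt;script", 1)

def strict_sanitize(comment: str) -> str:
    """Escape every special character so no markup survives at all."""
    return (comment.replace("&", "&amp;")
                   .replace("<", "&lt;")
                   .replace(">", "&gt;"))

# A payload with two script tags: the naive filter escapes the first tag
# and leaves the second one intact, so the browser would still run it.
payload = '<script><script>document.location="http://evil.example"</script>'

print(naive_sanitize(payload))   # second <script> tag survives
print(strict_sanitize(payload))  # everything escaped, nothing executes
```

A tester who pastes a doubled-up payload like this into a comment field finds the hole in seconds; an automated check that only asserts "the first tag is escaped" never will.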

A good manual tester with a desire to 'break things' would have found this security hole before the code went live just by playing around with the input fields.  If this had been found ahead of time, the developers would have fixed the code and the build would never have gone out.  I am sure that Google and YouTube were embarrassed by this incident.

As I said, XSS attack testing is my favorite.  I know that if I can get some code to execute, then a hacker with malicious intent will be able to exploit the security hole far better than I would even know how to.  My job as an exploratory tester is just to find the hole.  That's the fun part.






Tuesday, May 11, 2010

New Google UI...Has it been tested?

By now you have probably seen the rollout of the new Google UI. 

I have done some exploratory testing on it, and I am actually shocked that Marissa Mayer would let the new UI be released without it being very thoroughly tested.  If I could find some issues in a couple of minutes of playing around with it, I am sure that people using it are going to find that the UI is inconsistent across all apps and that the look is not very 'clean' or uniform anymore.

Undoubtedly automated tests were run on the basic functionality of the UI, and they probably passed, because if you put in a keyword you get the expected results.

My problem with it is that if more Test Engineers were doing exploratory testing, they would have seen some of the following issues. 
  • Images search has 'advanced search' under the search input box, but the Safe Search dropdown menu is set off to the right and looks awkward there.  If you click on it, it opens a dropdown menu that covers up one of the image results.  My question is, if the menu is going to cover up one of the image results anyway, why didn't they put it under the search box where it would look cleaner?
  • Another issue, related to the new left-column 'Everything Menu', is that once you click on Maps, the entire new 'Everything' search structure is missing.  The old Maps UI displays, with no links on the left side to get you back to where you started.
  • One other search app that has a problem with the new Everything Menu is Shopping.  Again, not all of the apps are listed on the Everything Menu, so you can't easily get back to a previous type of search.
  • For most of the other apps, the links in the left column Everything Menu are displayed, so you can easily return to a previous type of search. This is the way it should be.

Did someone actually look at this flow and try it out?
Just wondering.


Monday, May 3, 2010

The Census Is Every Ten Years...What About Testing It?

The Associated Press had an article recently entitled "GAO: Census has computer problem".
It turns out that they are running into a little glitch.  According to the article 'The bureau's Paper Based Operations Control System did not function reliably in tests and, despite hardware and software upgrades, "may not be able to perform as needed under full operational loads," the U.S. Government Accountability Office said in a report.'

So, my question is: if the Census is taken every 10 years, why weren't they able to get their Paper Based Operations Control System tested and working in time for the 2010 Census?  It turns out that the system was developed in early 2008, admittedly a little late in the cycle, but the question still remains: given two years to get it tested, debugged, and working reliably, someone dropped the ball.

Besides normal debugging of the software, when you are running a massive data collection operation like the U.S. Census, load and performance testing has to be right at the top of the to-do list.

In response, Census Director Robert Groves said, "We will get the census done with this system.  The question is, will everyone be smiling when it is done."  He also said that they will be spending more money on staff in order to finish the work.

So it looks like they will be using more people in this process (which in this bad economy is good news for people who land a job with the Census), but ultimately it shows how important proper QA is.  Had they put enough QA power on this project, the count for the 2010 Census would be done way before it's time to start the 2020 Census!




Wednesday, March 17, 2010

Security...What Security?

Yesterday I attended the Cloud Connect Conference in Santa Clara.  I enjoyed the keynote speakers and the breakout sessions covering various aspects of the Cloud frontier and the implications of working with new and older companies and with dealing with data that is stored off-site in the Cloud.

What does all of this mean for us?  Early adopters, of course, are already there: storing data, sharing documents, and collaborating in the cloud.  However, some companies are still entrenched in having a large on-site IT department and in using data and software that is tethered to their intranet and desktop computers.  The main take-away on Cloud Computing is that we are already in the Cloud, and the companies that are still entrenched in their own systems will need to migrate to the Cloud in order to be able to work anywhere and anytime.

This brings me to the issue of security.

I attended a fascinating session on "The Future of Cloud Security:  Panel Discussion About Security the Cloud Ecosystem - Sponsored by McAfee".
Members of the panel were:
Moderator - Charles Var, Director, McAfee
Speaker - Ronald Knode, Director, Global Security Services, CSC
Speaker - Shahed Latif, Partner, KPMG LLP
Speaker - Niall Browne, CISO & VP Information Security, LiveOps
Speaker - Scott Chasin, CTO, McAfee Software-as-a-Service

It seems that everyone is trying to make the Cloud more secure so that data, usernames, passwords, and documents are safe and users will have trust in the system.  There are standards for the enterprise right now (SaaS compliance, etc.), but there aren't equivalent standards yet for the Cloud.  The panelists said that we will see more standards set and companies boasting of having such-and-such compliance certification.

This brings me to 'Security...What Security?'

The problem that I see is that we will never be 100% secure in the Cloud, because we can't even be 100% secure when not in the Cloud.  There are so many inter-dependencies of companies that are collaborating with each other that if one part of the chain is not 'secure' then other members of the chain can potentially be compromised.

The latest hacking episode to surface, in which China hacked into Google, Yahoo and other companies, turned out to be related to Perforce, a program widely used for software development.  Who knew?  Everyone assumed that Perforce was secure, but it turned out to be the weak link in the chain.  Even if each company had a security certification, any new build or release shipped after the certification could again compromise everyone.  Just look at all of the security patches that Microsoft has released and you will soon see that there are holes everywhere; we try to patch and fix them, but between identifying a problem and patching it, there is potential for a security breach.

So, what do we conclude from this?  We are not going to hold back the Cloud.  It is here, and it is the future.  We need to rely on security firms to find and identify security holes and then quickly release patches and then we move on.

Computer security is just like any other type of security.  We are mostly secure but never 100%.  Best advice...don't make yourself crazy about it.  That's just the way it is.



Thursday, March 4, 2010

Method to My Madness

So you might wonder, "What does an exploratory tester do?  Does it really count as testing since it is not automation testing?"

Since leaving Google and starting my own company, I have run into companies that only want to hire automation test engineers, or companies that think that exploratory testing can be easily handed off to off-shore teams without having someone from their company monitoring and guiding the testing.  The idea is that exploratory testing is so simple that it can be handed off to teams that don't have in-depth knowledge of the product: they can just run some test cases while the automation testing covers the 'real' test cases.  Not so!

From my perspective, exploratory testing is an art...plain and simple as that. An outstanding exploratory tester takes great pride in breaking software, in finding the holes in the code, in stressing the system in a way that the software developer did not anticipate.  That is the fun in it, when you really love this type of testing.

Think of it this way.  Would you rather have your users find the bugs or give up using your site because of the bugs, or would you rather have someone with experience find the issues first so that your users will think that your site is awesome and easy to use?

Think like a user, but break the code like an expert!   That's what a great exploratory tester can do.

Let's take a simple example.  You have input fields on your site for people to sign up and get a login name and password. If the code is written properly, the fields will have limitations (number of characters allowed, types of characters accepted and those that are not allowed).

What happens if you haven't set the limits properly?  You leave yourself open to buffer overflows or cross-site scripting attacks, for example.  Also, if there are no limits to the fields, and you accept 200 characters for the username, what happens when you show the next screen that says "Welcome" and then proceeds to print out a name that is 200 characters long?  What happens to all of the nice formatting on the page that you were expecting?  Not looking so good now, is it?
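The fix for both problems described above is to enforce limits at the input and escape at the output.  Here is a minimal sketch; the 30-character cap, the character whitelist, and the function names are my own assumptions for illustration, not any particular site's rules:

```python
import html

MAX_USERNAME_LEN = 30  # assumed limit, just for this sketch
ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_-")

def validate_username(name: str) -> bool:
    """Enforce a length cap and a character whitelist at the input field."""
    return 0 < len(name) <= MAX_USERNAME_LEN and all(c in ALLOWED for c in name)

def welcome_banner(name: str) -> str:
    """Escape on output too, as defense in depth, before showing 'Welcome'."""
    return f"Welcome, {html.escape(name)}!"

print(validate_username("a" * 200))      # False: would wreck the page layout
print(validate_username("bob<script>"))  # False: disallowed characters
print(validate_username("alice_99"))     # True
print(welcome_banner("alice_99"))        # Welcome, alice_99!
```

With the length cap in place, the 200-character "Welcome" banner simply can't happen, and the whitelist closes the door on script tags before they ever reach the page.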

A great exploratory tester thinks 'What if _____?' and then proceeds to test it out.
A great exploratory tester truly enjoys this process of not following the expected road.
Anyone can do what is expected.  The interesting results come when you don't follow the expected path and then do what seems 'random'...but actually is well thought out and based on testing skills.
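Those 'What if?' questions can even be written down as a probe list and thrown at a feature.  The signup handler below is entirely hypothetical, a stand-in whose author only thought about one failure mode, which is exactly the situation a good probe list exposes:

```python
def signup(username: str) -> str:
    """Hypothetical signup handler; its author only thought about long names."""
    if len(username) > 30:
        raise ValueError("username too long")
    return f"Welcome, {username}"

# 'What if...?' probes: boundaries, emptiness, markup, non-ASCII, whitespace.
probes = ["a" * 30, "a" * 31, "", "<b>bold</b>", "名前", " leading-space"]
for p in probes:
    try:
        signup(p)
        print(repr(p[:15]), "-> accepted")
    except ValueError as err:
        print(repr(p[:15]), "-> rejected:", err)

# The empty string and the markup both sail right through; that is exactly
# the kind of hole this sort of deliberate, 'random-looking' probing finds.
```

The probes look random to an outsider, but each one targets a specific assumption the developer may have made.  That's the method in the madness.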

Many companies are using their software developers to test new builds.  Big mistake.
You wouldn't want your physician to do your dentistry, would you?  They are both doctors, but it takes a specialist to do the job right.  Don't let your software developers do all of your testing.  Sure, they will find some of the issues, but as your company grows, so does the expectation that your site will work all of the time.  People can be fickle.  If your site is down too much, loses data, or releases features that are unacceptable to your users, you will have people looking for other options.

So what do I mean "Method to My Madness"?  I mean that exploratory testing may seem like 'madness' and 'randomness' but it is really well researched and thought out, educated 'madness'!





Thursday, February 18, 2010

Google Buzz and Testing

As everyone has heard by now, Google released Buzz and then had to do a Code Red to fix some privacy issues and concerns that were raised by the public.  One issue that seemed to concern people was that everything was shared by default with all of one's Gmail contacts.  While this is a quick way to get the 'social' part of Buzz up and running for users, the Buzz Team was surprised to learn that people in the real world objected to this 'convenience'.

I give Google a lot of credit for responding so quickly to a major public relations problem and rolling out a fix.

However, I think that this problem could have been avoided if Buzz had used a team of exploratory testers, and if they had 'friends and family' test as well. 

The reason I am saying this is that, in general, software engineers and other highly technical people don't 'think like a user'.  They tested their code, and I am sure that it worked.  The problem here is that it really needed to be tested from a user's perspective.

I was a Test Engineer at Google for 7 years, but now I have my own testing company, TheTestingTeam. I say this to let you know that I am very technical...especially when it comes to testing websites.

My first reaction when I tried Buzz was shock: it wanted to use my location, which at the time I tested it was my home.  It was going to share my post and location with everyone, and I could see where everyone else was when they posted.  I just didn't feel comfortable with that idea.  I quickly figured out how to keep my posts private, but I am sure that the average user would just take the defaults.  That's where the problem was: there was too much sharing set up by default.  This works fine for developers and highly technical people, like Googlers, who can easily figure out how to configure the settings to best suit themselves.

Had the Buzz team used 'real' people or testers who think like users, this problem could have been identified and addressed before the release.  This would have saved a lot of bad publicity for the company.

I am not picking on Google, but rather just using this release of Buzz as an example of how important exploratory testing and usability studies (with real users) are to the successful release of a product.  Engineers shouldn't assume that just because they think their product is amazing, everyone else will feel the same way.

When software engineers write code, generally they write it in such a way that it works!  They have tested it, and if you do everything as expected when using the program, it works!  It takes an exploratory tester to do the unexpected.  That's where the bugs and the major issues are. Ultimately if the end-user doesn't accept the new product, doesn't like the new product for some reason, or gets frustrated with it, it doesn't matter how great your engineering is, the product won't make it in the real world.

Think like a user, but break software like an expert!