Choosing a Testing Methodology
Choosing which methods to use depends on the project, its current stage, and the goals of the test. The three methods I typically use are interviewing, usability testing, and playtesting. For each of these, I gather both qualitative and quantitative data; analyzed separately, each type is moderately useful, but analyzed and cross-referenced together they become extremely valuable. Because combining the two types of data is so useful, I have built a steady routine into all of my research designs to ensure that I gather both in some capacity, no matter the goal.
Playtesting is the method I use most often. It is a very valuable way to gather quantitative data from in-game metrics and a large amount of qualitative data from player feedback during the test, as well as through pre-test and post-test surveys. I then analyze that data thoroughly until I am confident I can write a report that gives the team actionable observations to work with. For surveys, I try to keep the questions consistent from build to build so that each report can reference previous reports' results and give a better sense of how testing went with the current build. A crucial step is designing how and what data the game gathers and dumps to a spreadsheet (a small section can be seen below). "Match ID" and "Player ID" let me isolate specific matches (games) and players that may need a closer look to see what was different about them compared to other matches and players. Probably the most important part of the design is the "Playtest Token", which lets me easily cross-reference the in-game data with the survey data (which carries the same "Playtest Token").
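The token-based cross-reference described above can be sketched as a simple join. This is a minimal illustration, not the actual pipeline: the field names and values here are invented for the example, and only the idea of keying both datasets on a shared "Playtest Token" comes from the text.

```python
# Sketch of joining in-game metrics with survey responses via a shared
# "Playtest Token". All field names and values below are illustrative.

# One row per player per match from the game's telemetry dump
game_rows = [
    {"playtest_token": "QL-017", "match_id": 3, "player_id": "P1", "goals": 4},
    {"playtest_token": "QL-018", "match_id": 3, "player_id": "P2", "goals": 0},
]

# One row per participant from the post-test survey
survey_rows = [
    {"playtest_token": "QL-017", "frustration": 2},
    {"playtest_token": "QL-018", "frustration": 5},
]

def join_on_token(game, survey):
    """Attach each participant's survey answers to their in-game rows."""
    by_token = {row["playtest_token"]: row for row in survey}
    joined = []
    for row in game:
        extra = by_token.get(row["playtest_token"], {})
        joined.append({**row, **extra})  # merge telemetry + survey fields
    return joined

combined = join_on_token(game_rows, survey_rows)
```

Once joined, each row carries both behavioral metrics and self-reported feedback, which is what makes the cross-referenced analysis possible.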
Once the data has been gathered, I can start visualizing and analyzing it. The visualizations below are from a milestone playtest report for Quasar League that gave the team and me an understanding of how players performed. While analyzing this data, I was trying to understand why 24% of our players found the game frustrating, and I pinpointed a few possibilities that could each have contributed to the issue. Nearing the end of the report, however, I noticed that 3 of the 7 matches played were very one-sided games (bottom left), which prompted me to check whether those matches had a relationship with the players who rated the game as frustrating, and they did (top left). Combined with earlier findings that players were not scoring as much as we wanted, that certain characters were performing in unexpected ways, and other unwanted behaviors, this gave the team an actionable path to improve the overall experience of the game. Without this analysis process, these observations would never have been made, and the simple solutions that were implemented would never have been realized.
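The cross-reference step above can be sketched in code: flag one-sided matches by score margin, then check whether the players in those matches were the ones reporting frustration. The margin cutoff, field names, and data are all assumptions for illustration; only the overall logic mirrors the analysis described.

```python
# Hedged sketch of the one-sided-match cross-reference. The threshold and
# all values below are illustrative, not the real Quasar League data.

matches = [
    {"match_id": 1, "score_a": 5, "score_b": 4},
    {"match_id": 2, "score_a": 7, "score_b": 1},
    {"match_id": 3, "score_a": 6, "score_b": 0},
]

players = [
    {"player_id": "P1", "match_id": 2, "frustrated": True},
    {"player_id": "P2", "match_id": 1, "frustrated": False},
    {"player_id": "P3", "match_id": 3, "frustrated": True},
]

ONE_SIDED_MARGIN = 4  # assumed cutoff for calling a game "one-sided"

# Matches decided by a large score gap
one_sided = {m["match_id"] for m in matches
             if abs(m["score_a"] - m["score_b"]) >= ONE_SIDED_MARGIN}

# Players who reported frustration AND played in a one-sided match
frustrated_in_one_sided = [p["player_id"] for p in players
                           if p["frustrated"] and p["match_id"] in one_sided]
```

If most frustrated players fall into the one-sided set, that overlap is evidence the lopsided matches are driving the frustration responses.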
Player/user interviews are extremely useful at the start of a new game or feature. I recruit participants to learn what players expect, want, and do not want. Interviewing yields a lot of qualitative data, but gathering quantitative data is more of a challenge. I solve this by structuring questions so that responses are "numeric" and by quantizing participant feedback.
For example, when asking players what interactions they would want with a potential new feature, the "Post-Game Heatmap" screen (Quasar League), I would ask which interactions sounded like something they would actually use to manipulate the heatmap (filters, timelines, etc.) and then have them rank those features from most to least important. This made it clear which features we should keep or add for our players, and which we should cut to save development time and keep the experience simple enough not to overload the player with information.
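One common way to quantize ranked responses like these is a simple Borda-style count, where a feature earns more points the higher each participant ranks it. This is a generic technique, not necessarily the exact scoring the team used, and the feature names and rankings below are invented for illustration.

```python
# Sketch: turning most-to-least-important rankings from interviews into
# an aggregate numeric score (Borda count). Data here is illustrative.

rankings = [
    ["filters", "timeline", "zoom"],   # participant 1: most -> least important
    ["timeline", "filters", "zoom"],   # participant 2
    ["filters", "zoom", "timeline"],   # participant 3
]

def borda_scores(ranked_lists):
    """Higher score = ranked as more important across participants."""
    scores = {}
    for ranking in ranked_lists:
        n = len(ranking)
        for position, feature in enumerate(ranking):
            # Top rank earns n points, bottom rank earns 1
            scores[feature] = scores.get(feature, 0) + (n - position)
    return scores

scores = borda_scores(rankings)
# Order features by aggregate importance to decide what to keep vs. cut
ordered = sorted(scores, key=scores.get, reverse=True)
```

Low-scoring features at the bottom of `ordered` are the natural candidates to cut in favor of development time.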
I typically run usability tests when I want to understand players' experiences with controls, their comprehension of new features (menus, new mechanics, level design), or when I want a more intimate understanding of the player experience. This method is very useful when I want to gather quantitative and qualitative data together, given how focused the testing is: data can be gathered within the game for the quantitative side while I observe and record player feedback as they play.
During the early development of Project Janus, I ran usability tests in which players played through a level while we focused on the usability problems they hit along the way. This gave us a better understanding of questions like: Where were they getting stuck? When did they lose track of their progress? Which mechanics were frustrating or unclear to the player, and why? Through these usability tests, we were able to pinpoint where improvements were needed, such as simplifying unnecessarily difficult tasks like jumping, reworking convoluted level design, and clarifying confusing UI.
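Observation notes from sessions like these can be tallied so the most frequent problem spots surface first. This is a generic sketch of that bookkeeping, not the team's actual process; the categories and locations are invented for the example.

```python
# Sketch: counting coded usability observations per (category, location)
# so recurring stuck points and unclear UI rise to the top. Illustrative data.
from collections import Counter

# (participant, issue category, where it happened)
observations = [
    ("P1", "stuck", "ledge jump"),
    ("P2", "unclear_ui", "objective marker"),
    ("P1", "unclear_ui", "objective marker"),
    ("P3", "stuck", "ledge jump"),
    ("P2", "stuck", "vent maze"),
]

issue_counts = Counter((category, where) for _, category, where in observations)
top_issues = issue_counts.most_common(2)  # the two most frequent problem spots
```

Issues hit by multiple participants (here, the ledge jump and the objective marker) are the strongest candidates for the kinds of fixes described above.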