For Pro Tour Hour of Devastation, our team was joined by newcomer Louis Bachaud, a young, enthusiastic and skillful player from Lyon. It was the first time he was part of a "serious" team, and he had quite a few expectations. He had a pretty clear idea of how "the pros" tested, and he wanted to be a part of it. He was ready to play as much as the 24 hours of the day would allow him to, anything against anything—or rather—everything against everything.
Unfortunately for him, the testing situation didn't meet his expectations. He thought we would be very methodical, that we would have a whole bunch of Excel spreadsheets with all the results, all the matchups covered, stats to analyze and draw conclusions from. He was disappointed when he found out that we didn't do what he thought any serious pro team would.
When the team gathers before the Pro Tour, or before any important tournament, the intention to do everything perfectly, or at least "better than the time before," is on everybody's mind. "Let's be more methodical," we say. That involves a lot of discipline: a set schedule of what plays against what, and when.
The goal of the metagame spreadsheet is to determine mathematically what the best deck is.
The idea of breaking down a metagame and all the possible results from all the matchups seems like a good one at first. However, for results to be relevant in one matchup, you need to play at least ten games (that's the bare minimum). With ten games, you can't really draw conclusions. You can have a feeling of how the games go (favorable, average, unfavorable). To have reliable numbers, you'd probably have to play between 20 and 100 games.
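As a rough illustration of why ten games tell you so little, here is a small sketch of the sampling error on an observed win rate, using the normal approximation to the binomial (the 60% figure is just an example, not a real matchup result):

```python
import math

def winrate_margin(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error on an observed win rate p
    over n games (normal approximation to the binomial)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 20, 100):
    m = winrate_margin(0.6, n)
    print(f"{n:>3} games at 60% observed: true win rate roughly "
          f"{0.6 - m:.0%} to {0.6 + m:.0%}")
```

Going 6-4 in a matchup is consistent with a true win rate anywhere from roughly 30% to 90%; even 100 games still leaves a margin of about ten percentage points either way.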
If you've been on a team of pro players, you know this is not exactly what really happens. Over the last 20 years, I've been on a lot of teams. And while some playtesting sessions went better with some people than with others, the "ideal method" never worked, for the simple reason that it's not viable, efficient, or humanly possible.
Let's say that you do want to commit the time to play all 100 games for each matchup. Assuming there are five decks in the format, that's ten matchups, so 1,000 games. At a pace of five games an hour, that's 200 hours of play, divided by the number of games that can run at once, which is half your player count. If you are ten players, that's 40 hours of non-stop gaming.
Maybe you don't want to play the 100 games and believe ten is enough. It goes down to four hours. But your results are going to be based on "only" ten games, so they won't be 100% accurate.
Many other factors come into play: who played these games? Which version was played? Maindeck only or with sideboard?
The level of play of every member of the team is not always similar. The results of the testing might be skewed due to one or two players being worse or better than the rest of the team, which results in individuals winning less or more than they should. When a result seems wrong (or is wrong), having to play the matchup again to confirm (or contradict) the results is a huge waste of time.
It's also incorrect to assume that players at the Pro Tour (or the tournament you're preparing for) are playing the latest version you found online. "Players don't play [insert card name] in their [deckname] anymore," you might hear, and that's just wrong. Everyone plays Magic differently, and plays a deck they like with the cards they like. Assuming that everyone plays the one version of a deck you want results for is inaccurate, and it's also dangerous to forget about cards just because they aren't in your stock lists (a stock list being the version of the deck you think most people are going to play). You will lose games at the tournament because you forgot about a specific card or sideboard tech you didn't think was reliable or good enough to make it into your stock list.
Every time you make a change to a stock list, you have to play the matchups all over again.
On top of all this, you have different ways to sideboard, assuming you play sideboarded games. As a result, there's a lot more room for mistakes or inaccuracies. Are you really going to try to find the perfect way to sideboard in every matchup? Even the decks you really won't consider playing?
With all that in mind, to have relevant and accurate stats - given that there are way more than ten playable decks in a format - you'd need tens of thousands of hours of playtesting, with perfect players, perfect sideboard plans, and all the different possible lists. This, as I said, is not humanly possible. Nevertheless, let's say you do try to create a spreadsheet that's as complete and accurate as possible. You then still have to factor in the metagame, which is another variable that you're most likely going to get wrong. You crunch all the numbers in a matrix and end up with the best deck to play, in theory.
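For what it's worth, the matrix calculation itself is trivial; the problem is the inputs. A minimal sketch, with entirely invented matchup percentages, metagame shares, and generic archetype names:

```python
# Illustrative only: every number here is a guess stacked on a guess.
decks = ["Aggro", "Midrange", "Control", "Combo"]

# win_rate[a][b] = estimated chance deck a beats deck b
win_rate = [
    [0.50, 0.55, 0.40, 0.60],
    [0.45, 0.50, 0.55, 0.45],
    [0.60, 0.45, 0.50, 0.40],
    [0.40, 0.55, 0.60, 0.50],
]

metagame_share = [0.35, 0.30, 0.20, 0.15]  # guessed field percentages

# Expected win rate vs. the field = matchup row weighted by metagame share
for name, row in zip(decks, win_rate):
    expected = sum(w * share for w, share in zip(row, metagame_share))
    print(f"{name:<9} expected win rate vs the field: {expected:.1%}")
```

Shift a couple of matchup estimates by five points, or misjudge one deck's share of the field, and the "best deck" changes; that fragility is the whole point.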
There are so many variables in your matrix that the result is likely to be wrong—and we're not even talking about the decks you don't have in your gauntlet. You end up playing countless hours just to find out that two teams found a deck you didn't. All of a sudden, all of your math is wrong…
Keep track of the results! An Excel spreadsheet is a good start, but don't trust the numbers too much. We'll see why further down.
This comes up almost every time. The concept: everyone picks one deck (usually the deck they like the most) and plays a match against every other deck, with sideboards (which usually aren't built optimally at that point of the preparation). The no-takeback rule is usually enforced, and players get emotionally invested in their decks. If you end up stuck with a deck you have no intention of playing or think isn't good enough, you won't be giving it your all either.
It's a waste of time: each deck isn't represented the way it will be at the Pro Tour (even if you make two or three players play the most popular deck), and a best-of-three match is not a representative sample. It becomes more a matter of who can play their deck best, even though ego and emotions should be put aside. Magic players are always very competitive, however, and when the word "tournament" is involved, that competitiveness kicks in.
If you want to see how good your deck is in a random environment, still close enough to reality, Magic Online is probably the best place to do so.
"If the team doesn't end up in a consensus on a deck to play, then the testing has gone wrong."
That's a misconception.
Having a team is important. You need teammates to build decks you wouldn't have thought of, have fresh opinions about your card evaluation, sideboard options, and most importantly, have people to play against. But in the end, your deck choice is going to be personal.
You have to play a little bit of every deck, if only to know which ones you don't want to play, and in aid of the common goal: understanding the metagame. Because yes, the team does need to spend some time trying as many decks as possible against each other—not necessarily in a methodical way—but to get a feeling for which decks are good and which decks might deserve more attention.
If the team thinks one deck is good and it's one of the decks you don't like, you may feel it's the wrong choice, either because you don't have the right feel for it or because you don't feel confident enough playing it. That's not a "team failure," it's just your own discomfort, which is a difficult thing to quantify.
In the end, you'll choose the deck you want to play because you know it's good, you know how to play it, and are ready to play any matchup with it.
Strict methods do not leave much room for inspiration.
This is what goes into a successful playtest session:
First of all, a lot of decks have to be built. The longer the preparation goes, the more decks have to be built, both the ones that are successful on Magic Online at the time, and random brews people think of. If the brew looks bad, it's not too important, just put it together, play a couple of games with it—maybe you're onto something. Better try and fail than not try and miss the best deck of the tournament.
Everybody has to play. It doesn't have to be methodical or very organized, but a little organization doesn't hurt. The idea of keeping track of the results is to remember how the matchup went: did it feel favorable? Average? Bad?
Also, noting why a matchup is favorable (or not) is necessary when you report the results of your match. Knowing what's good against what, and why, is key to understanding a format, especially when it's new. When someone else is testing and reports only the numbers, without notes, you don't know why Deck A is favored against Deck B; you have to take it for granted. You like Deck A, see that it has problems against Deck C, so you change a few cards to fix that matchup. Then you have to make sure the changes didn't affect the A vs. B matchup too much, and since you personally haven't played that match, you'll have to play the games all over again. If it's still favored, move along. If it's not, you've lost a significant amount of time.
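If you do keep a log, the structure of each entry matters more than the tooling. A sketch of the kind of record worth keeping (the matchup, score, and note are all invented):

```python
from dataclasses import dataclass

@dataclass
class MatchupResult:
    """One playtest session: the note explaining *why* it went that way
    is as important as the score itself."""
    deck_a: str
    deck_b: str
    wins_a: int
    wins_b: int
    feel: str   # "favorable", "average", "unfavorable" (from deck_a's side)
    note: str   # the reason behind the result

log = [
    MatchupResult("Deck A", "Deck B", 6, 4, "favorable",
                  "A's early pressure punishes B's slow starts"),
]

for r in log:
    print(f"{r.deck_a} vs {r.deck_b}: {r.wins_a}-{r.wins_b} "
          f"({r.feel}) -- {r.note}")
```

A spreadsheet with a mandatory "why" column accomplishes the same thing; the point is that a bare 6-4 forces the next person to replay the matchup from scratch.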
Once a lot of games have been played, someone on the team has to be inspired by a deck. There's always someone convinced they have the best deck. Part of the team should help them figure out whether the deck is good and deserves to be improved; the rest should keep looking.
Sometimes the team finds more than one deck that feels good. There's no problem in splitting up and trying to improve both decks at the same time.
The more focus there is on a team deck, the faster and more efficient the improvements will be. It has to be built in multiple copies, and all the sideboard plans have to be tried out. If a certain matchup seems bad, then you need to improve it or find a good sideboard plan. The more copies of the deck built, the more sideboard plans you can try.
The team needs breaks. A lot of them. Go out, to the beach if you're by the sea. Put on your running shoes and go for a jog, or even better, put on your gi and go for some rolls (if you're a BJJ fan). Go for some sightseeing. Going away from the game for a couple of hours a day is part of the preparation. It helps with the inspiration part. Playing non-stop creates tunnel vision and you won't be able to find fresh ideas.
Another good way to get inspiration flowing is to go for a meal with a few teammates—not necessarily the whole team—to share your results and ideas in a different environment. It could create some new perspectives you hadn't thought of before.
I don't consider myself a very good deckbuilder, but when it comes to choosing a deck for a tournament, I trust my gut and my preparation process. Over the last five Pro Tours, I played a deck very close to the winning decklist three times (White-Black in Sydney, Mardu in Dublin, Zombies in Nashville) and the two other times I was happy with my deck choice (Blue-Red Stitches in Hawaii and Marducks in Kyoto).
When approaching the highest level of tournament preparation for the first time, a newcomer may be surprised to discover what potentially looks like a disorganized and unstructured approach to testing. For those who have never experienced testing for something like a Pro Tour, expectations of detailed Excel spreadsheets, team leagues, rigorous team unity, and ordered testing processes may actually hamper some of the most important elements of finding the best results. You need to feel comfortable with your performance, play, and teammates, but most importantly you need time and space to find inspiration.
From my perspective, the best testing results come from a balance of all these things, and I believe these principles can be applied with preparation at every level of the game. What are your thoughts? How do you prepare for a tournament? What do you have to do to feel ready to compete?