November 11, 2020, 1:15 am
It’s hard to describe 2020 in terms of fantasy hoops, but like everything else it touched, 2020 clubbed fantasy basketball over the head.
And I thought there was a chance the post-mortem analysis might prove to be too noisy, but it wasn’t. So let’s get on with it!
This lookback has been a cornerstone of the Bruski 150 for the last decade. From this we build every year, looking at the highs and the lows and everything in between.
Editor’s Note: HoopBall has released its new products for the 2020-21 season and you’re going to love them all! Our monthly memberships have brought together the best of HoopBall, including the FantasyPass which gives you all things Fantasy including season-long, DFS, and Dynasty coverage, the Bruski 150 AND coverage of the NBA Draft and Free Agency. Our brand new WagerPass brings together some of the industry’s best handicappers to give you a smarter way to sweat. And our HoopBall 360 product gets you everything we offer, including the Bruski 150 TWO WEEKS BEFORE it is released in the 2020-21 Draft Guide. Click here to check it all out!
Now that I’ve buried the lede … last year we crushed it.
It was almost the TIDAL WAVE we talked about in the preseason and throughout the year. FOUR out of FIVE of my NFBKC and big money ringer leagues had top-3 finishes. When the season stopped suddenly my teams were loaded. The B150 was stocked up with massive wins everywhere.
Part of that was the fact that it wasn’t a public year — which we expected after all of the player movement from the previous offseason. So not only was it a chaotic year before the shutdown as the market figured things out, the shutdown and all the subsequent events leading up to this moment mean we have maybe our best chessboard ever. I am extremely excited knowing that the degree of difficulty preparing for this season is about as high as it may ever get.
But before we look ahead to 2020-21, we need to look back at how we did.
As usual, we look back at how we did against the fantasy public using ADP data from draft season and then against the best big box fantasy site out there — my old pals at Rotoworld.
We also want to keep tabs on the world of big money competitions and though I got knee-capped by one big money COVID finish (3rd instead of what was almost assuredly going to be a 1st), my second place in the $30,000 NFBKC Super was solid and I had an easy doubling of my entry fees last year. The only thing I’m irked at all about is the fact that this could have easily been a run of 1st places. I had teams with Steph Curry and Zion Williamson (pre-injury) on them going huge, regardless of the injuries.
As longtime readers know – half of the guys I play against in these big money leagues continue to use my list against me. One guy has cleared $100,000 over the last few years in various ringer leagues using our guide. There’s no greater honor than being bracketed in a snake draft against the top players in the world – and they all know who they’d better be scooping up.
As for the comparative analysis, as I’ve done for five seasons now, I like to use my old friends at Rotoworld as their news feeds are everywhere and their info forms both the public opinion and also the intermediate-to-expert level opinions you’ll see out there in the fantasy space.
In a truly grueling and inane exercise, we look at the Bruski 150 to find out how we did in a few different lights. In each of these analyses I welcome anybody to offer a counterpoint, but the key to this is being brutally honest with the assessments. What we’re looking to determine is which rank was the smarter rank, accounting for the totality of the situation.
The first and broadest analysis is a rank-by-rank assessment which determines an overall record between the two sites. After all, each of the ranks matters in some context, so zooming out to see what the aggregate win-loss records are is a good way to show the overall strength of a set of ranks. It also keeps a few good or bad ranks from swinging the analysis.
Then we look at the ranks while accounting for how important a given play was. First we do this by assigning an impact rating. Then when looking at the ranks in relation to ADP, we’re looking at how likely or unlikely a site’s followers were to get the good (or bad) pick.
Finally, we dive into the movers and shakers to see which set of ranks did well on those needle-moving plays.
The good news is that we ran the score up to a perfect 5-0 record over the last five years, beating Rotoworld 77-58 in 8-cat at a 57 percent clip with an outstanding 73-58 run in 9-cat leagues for a 56 percent mark.
Anytime one can have that type of an advantage against some of the best in the industry, it’s a great year, and for fantasy players who are looking to pair their own thoughts with the best in the business, there’s only so much room for voices. So hopefully this helps inform some of your choices in that regard.
Trying to take a deeper look (nerd alert), I created a methodology for determining how good or bad a recommendation was. It has two parts:
1. I use color schemes to measure ranks against each other and in relation to ADP
2. I assign ranks a grade on an impact scale of 1-5
We eventually end up multiplying those together to create a loose system – a starting point if you will – for comparing prediction sets. As described later, the tension between these two aspects, impact on one hand and ranks relative to ADP on the other, allows us to hedge a bit when the qualitative analysis starts to take over.
RANKING VS. RANKING
The color schemes are:
• Dark Green (massive win, easily had opportunity to draft a player relative to ADP)
• Green (solid win, likely to have had opportunity to draft a player relative to ADP)
• Yellow (painful loss, prediction put owner in likely position to move the needle backward)
• Red (brutal loss, prediction put owner in an even more likely position to move needle backward)
Not all prediction wins are created equal. Some are dumb luck and have massive impact, which isn’t the sign of a good prediction, and other great predictions have smaller impacts but deserve more credit. If there was an uncontrollable event not tied to obvious injury risk, that’s probably not getting an assessment. We didn’t have a Gordon Hayward style injury from three seasons ago, but somebody like Zach Collins playing three games would be an obvious rank we skip the analysis on.
From there we want to look at the nature of injuries. Were they something that we could have known about? Were they factored into the draft situation as a risk-reward play? If a player got extremely lucky due to unforeseen injuries ahead of him, we’re not trying to reward or punish predictions as much as we would a prediction that’s based on known variables — one that reflects greater understanding of stat sets, usage rates and the like.
Mix that all up and then everything gets weighed out in context.
Each rank and evaluation is given the type of scrutiny you’d want to have if you could turn back time and do it all over again.
As we go further down in the draft, when player values start to bunch up, the grading loosens up a tiny bit and color grades won’t reward mild differences. At the same time a sleeper that can crawl up into early round value would get rated as a high impact.
Again, the key to this is to be brutally harsh with myself and give my competition benefit of the doubt when evaluating these predictions.
It’s entirely possible I have screwed up on a piece of logic in an example in an attempt to be expedient. I’m pretty sure any shifting results will be within a reasonable margin of error and not take away from the findings.
If you see anything hugely off, just let me know and I’ll make adjustments, but I doubt it’s going to matter.
The impact analysis seeks to determine whether the prediction put the drafter in position to make a gain or avoid a loss, and to what degree — and then it aggregates that for the entire prediction set.
As for the impact analysis itself, it is also qualitative but it does trend toward ‘just the facts.’ There, we’re measuring how much distance there was between the predictions and the results.
That scale from 1-5 — it’s really just 1-4 as a grade of 5 is for Hall of Fame level needle-movers that occur maybe once in a season if they occur at all.
No players from last year received the fabled ‘5.’ Only one player got a 5 in the prior season and that was James Harden who nearly lapped the entire field in 8-cat. Kawhi Leonard got a 5 the season before that for being the worst fantasy pick of all time, perhaps, as he was a first round pick that nobody could even drop because he strung everybody out in a lost season.
Last year, picks like Stephen Curry, Karl-Anthony Towns, Kyrie Irving, De’Aaron Fox, Mike Conley, Otto Porter, Myles Turner, Josh Richardson, Marvin Bagley, and Wendell Carter blew holes in rosters, and a lot of those situations were tough-luck situations. Maybe you avoided Curry because of his injury history but if he doesn’t get hurt it’s possible he hits a Hall of Fame impact of ‘5’ last year. Towns was a durability stud and unless you faded him because he was due (we did), there was no reason to avoid him in drafts. Irving was a known risk and that’s why we stayed away, as was Porter, but these were all mostly fluky situations.
Still, because of the impacts of those decisions they pretty much got twos or threes for the Impact Rating across the board. The only four in this group was Curry because it was a massive loss any way you slice it.
On the flip side oh boy did we have some winners. Dennis Schroder, who graced the cover of The HoopBall Six, was my favorite pick heading into drafts as the fantasy world just absolutely snoozed on him, even during the year, as he racked up a top 60-65 season at a very late-round price — even in expert leagues. So yeah, he got a four.
Speaking of winners, the rest of the HoopBall Six was strong outside of a tinge of bad luck, headlined by Bam Adebayo, who finished with back-end first round value. A lot of folks thought he was overpriced in Round 3 and when you can get big returns on players who also cost a lot it feels better than the average planting of the flag. Brandon Clarke was all profit right off the bat and went strong until some injuries kept him from being a headliner here, and Buddy Hield and Taurean Prince even held value despite respective worst-case scenario situations. Marvin Bagley was the only bummer as injuries and Luke Walton struck all of the Kings, which caused his season to end before it even started. Honorable mention HB6er took over right away in Sacramento and if not for a late injury we’d also be talking about him inside of the top-50.
Around the rest of the rankings, James Harden got a four for another massive campaign. Fred VanVleet, who I was not anywhere near high enough on, got a four after cranking out a top-50 season from a mid-late round cost. Another guy I missed, Brandon Ingram, was a four as he went from one of the most heavily weighted inefficient guys in the league to the converse of that. Elsewhere in the win column we cleaned up on Chris Paul in one of the highest value early round pickups, and then late in drafts we got early-mid rounder Davis Bertans when he wasn’t even making the rankings. We hit big on Mikal Bridges, Zach LaVine of all people and Kawhi Leonard — who actually held up.
(And a few moments after publishing this I realized we had a massive win with Nerlens Noel and undercut myself on the 8-cat analysis.)
It certainly felt like this season there were more smaller needle-moving situations in the three range and less total jackpot/disasters in the four range. If you want to look at previous seasons feel free to dive down that rabbit hole if you’re crazy enough.
How do outcomes, big and small, either help or hurt a predictor in the ratings? After all, it’s only one prediction out of over 200. For drafters, it’s one of 13-16 picks in standard leagues against 9-11 other owners.
That’s where the impact analysis merges with the head-to-head ranking analysis to create a methodology for understanding how impactful the predictions were.
To tie this all together I created a simple integer system associated with each of the aforementioned colors:
• Dark Green – Massive Win (+4)
• Green – Solid Win (+2)
• Yellow – Painful Loss (-2)
• Red – Brutal Loss (-4)
That, multiplied by the impact rating, is the best way I’ve found to mix a results-based review with one that also takes care to measure the realities of the predictions being made.
I can pick a million holes in this system, but what it’s essentially saying is that a good or bad decision on these impactful players can be worth 2-4 or even 40 times (Kawhi two seasons ago) more than your run-of-the-mill ‘push’ on a player prediction.
Most big, impactful predictions, where one site is really high on a guy and the other site is low and something really impactful happens (good or bad), the kind that puts all of your readers on one side of the line vs. the other, check in at 10-40 times more impactful than a ‘push.’
That ranking ‘win’ or ‘loss’ isn’t moving the needle too much, but getting Dennis Schroder with one of your last picks — that bought you well over a half-draft of value.
You’d be doing great if six of your picks each etched away a round’s worth of value in your favor, but you got a guy that did that in one swoop. And Hoop Ball readers had him all over rosters, whereas fantasy GMs that lean toward Rotoworld didn’t have him in their top-200. Guess who had Schroder everywhere. You get the point.
So as we do this analysis, I mostly want to understand if the big needle movers were going in our favor.
Because the colors were often influenced by the reality of a prediction situation, there are cases when a color rating has been upgraded or downgraded to better reflect that tension when looking at the totality of an impact rating.
Guys that are hopping or costing 3-4 rounds or more as we get into the middle rounds are your 3s, and players that moved the needle for a few rounds are 2s.
It’s assumed that everybody understands that just because you ranked a guy highly doesn’t mean you’re drafting him way ahead of ADP.
So to put a bow on this, if a dark green prediction was made by one predictor (massive win, easily had opportunity to draft a player relative to ADP) and it had an impact of 4, that score would be:
Color Value of Dark Green (+4) * Impact Rating (4) = 16
For somebody that made a bad prediction the formula would be the same except it would contain a negative integer for the Color Value and ultimately a negative number for the grade.
We total those numbers up and get a better sense for the weight of the wins and losses.
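For the spreadsheet-inclined, the arithmetic above is simple enough to sketch in a few lines of Python. This is a hypothetical illustration of the system described here, not the actual tooling behind the rankings, and the sample prediction set is made up:

```python
# Integer values for each color grade, as described above.
COLOR_VALUES = {
    "dark_green": 4,   # massive win
    "green": 2,        # solid win
    "yellow": -2,      # painful loss
    "red": -4,         # brutal loss
}

def prediction_score(color, impact):
    """Score one prediction: color value times impact rating (1-5)."""
    return COLOR_VALUES[color] * impact

def total_impact(predictions, min_impact=1):
    """Sum scores across a prediction set, optionally dropping
    low-impact plays (e.g. min_impact=3 keeps only movers and shakers)."""
    return sum(
        prediction_score(color, impact)
        for color, impact in predictions
        if impact >= min_impact
    )

# Hypothetical prediction set: (color grade, impact rating) pairs.
preds = [("dark_green", 4), ("green", 2), ("yellow", 3), ("red", 2)]
print(prediction_score("dark_green", 4))   # 16, matching the example above
print(total_impact(preds))                 # 16 + 4 - 6 - 8 = 6
print(total_impact(preds, min_impact=3))   # 16 - 6 = 10
```

The `min_impact` filter is the same move used later in the results: stripping out the twos so only the threes and fours drive the comparison.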
Again – this is all something that can surely be improved upon, but abstract analysis goes hand in hand with fantasy analysis just as much as the pure numbers do, so I like it.
See if you agree with the color ranks, the impact ratings and even the overall count.
In the end it looks like my predictions carried about 250 more rating points (Total Impact) than Rotoworld’s and we both crushed ADP. It weighted out as a score of 274-152 in 8-cat and 244-122 in 9-cat. If we filter out all of the twos to really look only at the movers and shakers (so threes and fours), we held a 184-96 edge in 8-cat and a whopping 162-58 in 9-cat.
Also, for a link to last year’s B150 you can click here.
Thanks for reading this far. I can tell you that just like last year I haven’t felt this great about a season in a long time, so we should pummel these results.