I Used AI to Predict 24 NBA Games: Starting from a 0% Exact Score Rate

Draft Date: 2026-05-06 · Publish Date: 2026-05-12
Suggested Category: Post-mortem / Sports (not default Web3)


Origin: Setting the Tone with One Sentence

Over the past month, I've used walker-learn.xyz/predictions/ as a public AI prediction sandbox, posting predictions daily and filling in scores after each game. Out of 24 NBA games, there were 0 exact score predictions, and 13 correct outcome predictions (54.2%). This article is for those curious about what AI can actually do in sports prediction, a real data post-mortem without marketing jargon.


How the Data is Calculated

When each prediction is published, the following are recorded: home team, away team, start time, and the predicted score given by the model. After the game, actual scores are pulled from api-football and automatically settled into three categories:

  • Exact Score: Predicted score = Actual score (rare)
  • Correct Outcome: The winning/losing direction was correct, but the score was wrong (main category for NBA)
  • Miss: Both the winning/losing direction and score were wrong

The detailed methodology is described in the 'How We Predict' section of /predictions/.
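
The three settlement buckets can be sketched as a small function. This is a minimal sketch of the rule as described, not the site's actual pipeline (which pulls final scores from api-football); field names are mine, and it relies on the NBA having no draws:

```python
def settle(pred_home: int, pred_away: int, actual_home: int, actual_away: int) -> str:
    """Classify one prediction against the final score into the three buckets."""
    if (pred_home, pred_away) == (actual_home, actual_away):
        return "exact_score"      # predicted score == actual score (rare)
    pred_margin = pred_home - pred_away
    actual_margin = actual_home - actual_away
    # Same-sign margins mean the win/loss direction was right (NBA has no draws).
    if pred_margin * actual_margin > 0:
        return "correct_outcome"  # right winner, wrong score
    return "miss"                 # wrong winner and wrong score
```

Run against two cases discussed below: predicting 118-105 when the final was 111-98 settles as "correct_outcome", while predicting 109-101 when the final was 106-107 settles as "miss".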


The Truth About NBA: Exact Score Prediction is Not a Reasonable Goal

In soccer, low scores like 1-0 or 2-1 are common, and the probability of AI guessing them isn't especially low (9.9% overall across the site). For an NBA game that ends with a score like 110 to 105, the combination space is dozens of times larger than soccer's. Theoretically, AI could occasionally hit one; in reality, I hit 0 out of 24 games. The conclusion is simple: for NBA, judge the outcome, not the exact score.

This is why the hero block on /predictions/nba/ openly displays 'Exact Score Rate 0%, Win/Draw/Loss Hit Rate 54.2%' – the ceiling for AI's ability to predict high scores is right here.
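
A back-of-envelope estimate shows why 0 out of 24 is the expected result, not bad luck. Assume each team's score is roughly Normal with mean 113 and standard deviation 12 (illustrative ballpark parameters, not fitted to real data): even a perfectly centered point forecast then matches both final numbers only about once per thousand games. A Monte Carlo sketch:

```python
import random

random.seed(0)

# Illustrative assumption: each team's score ~ Normal(mean=113, sd=12),
# rounded to an integer. Parameters are ballpark league figures, not fitted.
def simulate_exact_hit(trials: int = 200_000, pred: tuple = (113, 113)) -> float:
    """Estimate the probability that a fixed point forecast matches both scores."""
    hits = 0
    for _ in range(trials):
        home = round(random.gauss(113, 12))
        away = round(random.gauss(113, 12))
        if (home, away) == pred:
            hits += 1
    return hits / trials

print(simulate_exact_hit())  # roughly 0.001, i.e. about 1 in 1000 games
```

At roughly a 1-in-1000 chance per game, seeing zero exact hits in a 24-game sample is exactly what the model of the problem predicts.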


Is 54.2% Win/Loss Rate High or Low?

Based on long-term data from the NBA public market (Vegas line), blindly betting on away wins is about 42%, and blindly betting on home wins is about 58%. A simple baseline of 'home win = win' is 58% accurate. My 54.2% is actually slightly lower than this mindless baseline (about 4 percentage points less).

One interpretation is that the sample is too small (24 games, the first full month). Another, more honest interpretation: the current prompt under-models NBA home-court advantage and end-of-season form, so its picks drift away from even the naive home-win anchor. Either way, this AI prediction system currently does not beat a strategy you could compute with a single line of SQL, and I won't wait until the data looks good to say so.
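
That baseline really is trivial to compute. A minimal Python sketch, using four real final scores from this month's games (field names are hypothetical):

```python
# Hypothetical settled-game records; field names are illustrative.
games = [
    {"home_score": 106, "away_score": 107},  # Knicks-Hawks: home loss by 1
    {"home_score": 93,  "away_score": 114},  # Blazers-Spurs: home loss by 21
    {"home_score": 111, "away_score": 98},   # Spurs-Blazers: home win by 13
    {"home_score": 101, "away_score": 73},   # Lakers-Suns: home win by 28
]

# "Always pick the home team" baseline: fraction of games the home side won.
baseline = sum(g["home_score"] > g["away_score"] for g in games) / len(games)
print(f"{baseline:.1%}")  # 50.0% on this toy sample; ~58% on long-term NBA data
```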

Furthermore, sportsbooks take roughly 5% vigorish (juice), so at standard -110 odds you need a hit rate of at least 52.4% just to break even. At the current hit rate, this AI has no realistic chance of beating the books. If someone tells you they can consistently win at this, don't believe them.
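
The 52.4% figure follows directly from the odds: at -110 you risk 110 to win 100, so the break-even hit rate p solves 100·p = 110·(1−p), giving p = 110/210. A quick check:

```python
def breakeven_hit_rate(odds: int = -110) -> float:
    """Break-even win probability for negative American odds: risk |odds| to win 100."""
    risk = abs(odds)
    return risk / (risk + 100)

print(f"{breakeven_hit_rate(-110):.1%}")  # 52.4%
```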

My reason for doing this isn't to win money; it's to see the limits of AI in the 'public data + structured analysis' workflow. Compared to buying courses, consulting experts, or looking at 'inside information' on social media, the act of publicly admitting the correctness or incorrectness of each prediction and making it available for scrutiny is inherently more valuable than the hit rate itself.


A Few Interesting Cases

24 games isn't a lot, but it's enough to identify several typical patterns of AI failures. Each entry includes the original text, and all scores and analyses are publicly verifiable.

  1. 1-Point Close Win, AI Predicted Opposite Direction +8 Points – Knicks vs. Hawks (4/21). AI predicted a Knicks home win by 8 points (109-101), but the actual score was 106-107, a 1-point away win for the Hawks. 1-point games are common in the NBA (many go to overtime), and AI missed both the direction and the margin, a classic double miss.

  2. 33-Point Swing Blowout – Trail Blazers vs. Spurs (4/26). AI predicted a home win by 12 points (118-106), but the home team actually lost by 21 points (93-114). A 33-point error, AI completely missed the Spurs' form that night.

  3. Score Wrong, Spread Exactly Right – Spurs vs. Trail Blazers (4/20). AI predicted 118-105 (spread +13); the actual score was 111-98 (spread +13). The score wasn't exact, but the margin was hit precisely. This was the cleanest instance of "methodology working" out of all 24 games.

  4. Lakers Crushed Suns by 28 Points – Lakers vs. Suns (4/11). AI predicted a Lakers win by 7 points (115-108), but the Lakers actually won by 28 points (101-73). The Suns scored only 73 points for the entire game (league average 113), suggesting AI missed the Suns' injury or rest arrangements that night – precisely the kind of private information a prompt cannot access.

  5. Two Games Where AI Predicted Away Wins, But Home Teams Dominated Instead – Magic vs. Pistons (4/25) + Rockets vs. Lakers (4/27). In both games, AI predicted an away win, but the home teams dominated instead (home wins by 8 / 19 points). This is the most consistent systemic bias of the AI in the data: underfitting of home court motivation.

Every entry above can be verified on /predictions/nba/; I haven't removed or hidden any result after settlement. All settled games are tagged _match_settled and included in the sitemap.


What I Learned

  1. Score prediction is not the product direction: Having AI guess an exact two-sided scoreline like 110-105 is a meaningless illusion of precision. The product should shift toward probability distributions, or toward margin-based questions like the spread and over/under.
  2. Prompt engineering is not monotonically increasing: Switching to more complex prompts doesn't necessarily improve accuracy. Early on, my hit rate wasn't low with minimalist prompts, and adding a bunch of player props didn't significantly boost it either. Now, I'm running A/B tests to choose between prompts.
  3. Publicly admitting missed predictions matters more than boasting about hits: Weekly post-mortems run automatically and feed the causes of any drop in hit rate back into the prompt. If this isn't enforced in public, model iteration degenerates into a selective narrative.
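
On lesson 2: with only ~24 games per prompt variant, A/B differences of a few hits are statistical noise. A sketch of the sanity check I mean, using a normal-approximation two-proportion test (the rival prompt's 17/24 rate is a hypothetical example):

```python
from math import sqrt, erfc

def two_prop_pvalue(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """Two-sided z-test for a difference in hit rates (normal approximation)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p-value

# 13/24 vs a hypothetical rival prompt at 17/24: p ≈ 0.23, not significant.
print(two_prop_pvalue(13, 24, 17, 24))
```

Even a 4-game gap over a month doesn't separate two prompts at this sample size, which is why the A/B test has to keep running rather than crown a winner early.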

Next Steps

If you're also experimenting with AI + public data, the hit rate isn't the most important thing; publicly displaying the hit rate on the homepage is.


Disclaimer: This article does not constitute any form of betting advice. Please comply with local laws and regulations.

Theme test article, for testing purposes only. Published by Walker; please credit the source when reposting: https://walker-learn.xyz/predictions/recap-ai-nba-24-games

