March Madness is upon us. Do you know what that means?

It’s time to fill out your bracket — and figure out what metric(s) will guide your choices this year.

Will you pick based on seeds? Mascots? School colors? Vibes?

What about KenPom rankings? Personal connections? Intense but inexplicable grudges?

Here at the Deseret News, we decided to bypass these common options and turn to ChatGPT for help.

Using ChatGPT to build a March Madness bracket

There are multiple ways to use ChatGPT to build your bracket.

  • You can ask it for ideas on how to randomize your picks.
  • You can ask it to fill out a bracket without offering it any parameters or guidance.
  • You can ask it to analyze your loved ones’ brackets and then create a unique one for you while writing a limerick about why their choices are bad.
  • You can ask it to offer you emotional support and encouragement as you combat decision fatigue.
  • You can ask it what it needs — season stats? seeding information? — to build the best bracket possible and then feed it that information.

We chose the fifth option and worked in tandem with ChatGPT.

Here are the steps we took:

  • We asked ChatGPT to tell us what information it would like to use to make a bracket.
  • It responded with a long list of requests: seed rankings; team records, including conference records; recent performance trends; season schedule information; injury reports; defensive and offensive efficiency; tournament experience; style of play details; and game location.
  • Since we didn’t have time to help ChatGPT find all that information, we asked it to narrow its list down to the three most predictive metrics.
  • ChatGPT settled on defensive and offensive efficiency, recent performance trends, and seed rankings. (Yes, that’s technically four, but defensive and offensive efficiency are both easy to get from KenPom.)
  • We gathered that information from the web and collected it in a spreadsheet. Then, we uploaded the spreadsheet to ChatGPT.
  • At this point, ChatGPT was finally ready to make its picks. We walked it through the matchups, occasionally helping it resolve spreadsheet-related confusion.

To be clear, we would have made many, if not all, of the same educated guesses if we had worked from the spreadsheet ourselves instead of feeding it into ChatGPT. But ChatGPT could crunch the numbers and make comparisons much faster than we could.
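For the curious, the kind of comparison involved might look something like the sketch below. To be clear, this is our own illustration, not ChatGPT's actual method: the weights, team names and stats are all made up, and it simply blends the three metrics ChatGPT asked for (efficiency margin, recent form and seed) into a single rating.

```python
# Illustrative only: combine net efficiency (offense minus defense),
# a recent-form score and seed into one rating, then pick the higher-rated team.
# All numbers and weights are hypothetical, not from the actual bracket.

def rating(team):
    """Weighted score from efficiency margin, recent trend and seed bonus."""
    net_eff = team["off_eff"] - team["def_eff"]  # KenPom-style efficiency margin
    return net_eff + 2.0 * team["trend"] + (17 - team["seed"]) * 0.5

def pick_winner(team_a, team_b):
    """Return whichever team has the higher rating."""
    return team_a if rating(team_a) >= rating(team_b) else team_b

# A made-up matchup between a favorite and a hot underdog
favorite = {"name": "Team A", "seed": 2, "off_eff": 118.0, "def_eff": 95.0, "trend": 0.5}
underdog = {"name": "Team B", "seed": 15, "off_eff": 112.0, "def_eff": 98.0, "trend": 3.0}

print(pick_winner(favorite, underdog)["name"])
```

With these particular numbers the favorite still wins, but crank up the underdog's recent-form score enough and the model flips its pick, which is roughly how a metrics-driven bracket ends up with upsets.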

What upsets will happen?

Although ChatGPT considered tournament seeding when making its decisions, it still chose several upsets based on the other predictive metrics it used.

For the most part, we accepted these picks without comment. After all, upsets happen every year and they’re a key part of what makes March Madness fun.

But we did push back on some particularly surprising decisions, including ChatGPT’s conclusion that Howard would beat North Carolina, a No. 1 seed, after winning its play-in game.

“Why did you choose Howard over North Carolina?” we asked.

ChatGPT buckled under the pressure, acknowledging that the pick was probably a mistake.

At other points, ChatGPT defended its upset picks. When that happened, we let the choice stand.

Here are ChatGPT’s upsets from the first round:

  • No. 12 seed UAB over No. 5 seed San Diego State.
  • No. 10 seed Drake over No. 7 seed Washington State.
  • No. 15 seed South Dakota State over No. 2 seed Iowa State.
  • No. 9 seed Michigan State over No. 8 seed Mississippi State.
  • No. 12 seed Grand Canyon over No. 5 seed Saint Mary’s.
  • No. 9 seed Texas A&M over No. 8 seed Nebraska.
  • No. 11 seed Oregon over No. 6 seed South Carolina.
  • No. 10 seed Colorado State over No. 7 seed Texas.

Thanks to some other upset picks, ChatGPT concluded that two No. 3 seeds — Illinois and Kentucky — will make it to the final.

ChatGPT March Madness bracket

Without further ado, here’s the full bracket from ChatGPT.

You’ll notice that we left it blank where you’re supposed to guess the total points in the final, the metric that’s often used as a tiebreaker in bracket competitions.

We failed to make a guess because that's when we hit ChatGPT's hourly question limit. We suppose we can't resent the system for making us figure out one thing on our own.