What Authors Can Learn From the Latest Hugo Awards/Worldcon Controversy



What happens when one of the most respected institutions in sci-fi turns to AI to vet its potential panelists? You get another Hugo Awards and Worldcon controversy, one that has once again set the science fiction community ablaze. This time, the uproar could have been easily avoided if organizers had simply paid more attention to the audience they were supposed to serve.

In this week’s blog, Ginger reflects on his own past missteps with AI (on this very topic) while exploring why the technology remains such a divisive issue among authors. Focusing on Worldcon’s latest blunder, he takes us through what went wrong, why it matters, and how writers can avoid making similar mistakes in their own careers. By understanding your readers, being transparent, and using tech responsibly, you can turn this year’s mess into a valuable lesson in building lasting trust with your audience. Even if Worldcon doesn’t learn from its errors, the rest of us sure can. 


Confession time.

A while back, I wrote an article about the Hugo Awards that, let’s just say, didn’t exactly win me a Nebula for accuracy. I got some facts wrong—like claiming Chengdu hosted Worldcon twice when it was actually Asia’s second go—and I caught some well-deserved flak from Hugo winners, nominees, and Worldcon folks. In fact, I ended up re-writing the entire article, as well I should, given how wrong I’d been while simultaneously questioning the awards’ credibility in a way that dissed the hard work of every writer who’s ever clutched that shiny rocket. 

But what I didn’t tell anybody at the time was why I’d made those mistakes. In researching the article last year, I’d become a little confused by all the details and thought I’d be really clever by asking ChatGPT to summarize the timeline of events leading to last year’s Hugo Awards controversy. I didn’t ask ChatGPT to write the article for me, but I did use it for research.

And what I hadn’t realized until after I’d clicked “publish” is that ChatGPT hadn’t been accurate in that research. It had made a bunch of stupid mistakes and I’d ended up parroting them in the article itself, because I hadn’t verified my facts.

Look, I’m not proud of myself. I’m ashamed. I can promise you it was a one-time thing, a moment of weakness, but I’m still deeply ashamed of my mistake, and I’m actually very glad I got called out for it so I could stop before I went too far down the AI rabbit hole with my writing.

Honestly, it was a wake-up call: so-called Artificial Intelligence still isn’t ready for prime time, even though the technology has advanced massively in just the past year.

I’ve conscientiously avoided using AI for research ever since. And even as the technology spreads through medicine, law, and other industries, I’ve noticed how advocates keep boasting about its increasing accuracy (80% here… 90% there), even though the best AI still tends to be wrong more than one time in ten.

So, why am I telling you this?

Well, I’d have quite happily kept my one-time exploration into AI a secret, if it wasn’t for the Hugo Awards hitting the headlines once again, and this time for a similar screw-up. I wanted to write about what happened, but I couldn’t do so without also confessing my own transgression. Not only would that have been unethical, but it would have skipped some important context which makes this new story especially relevant to me.

So, please feel free to lay into me in the comments section below. I deserve it.

But with that confession out of the way, let’s dive into the latest Hugo drama and figure out what it means for self-published authors like you and me.

The Background

Worldcon is the annual convention at which the Hugo Awards (the Oscars of sci-fi and fantasy writing, which have crowned giants like Frank Herbert and J.K. Rowling) are presented.

In 2025, the event is set to take place in Seattle, where guests of honor like Martha Wells, Donato Giancola, and Bridget Landry will celebrate the best of science fiction and fantasy writing at the 83rd Worldcon. As always, the event is set to include hundreds of hours of panel programming, presentations, workshops, and table talks featuring dozens of writers, sci-fi and fantasy fans, influencers, and creators.

But this year, the Worldcon crew were feeling a lot of extra heat about the event. Not only was there the previous year’s controversy to deal with, but America in general has become a very sensitive place when it comes to discussing difficult issues. Right now, peaceful protestors in American universities are getting pepper-sprayed, beaten, and deported for their political opinions—so, understandably, the organizers of Worldcon hoped to avoid too much negative attention by screening potential panelists ahead of time.

But with over 1,300 panelist applicants, that’s a lot of screening to do! So, according to a statement by Worldcon Chair Kathy Bond on April 30, they fed the applicants’ names into ChatGPT to speed up searches of their social media accounts and online presence, with humans supposedly double-checking the results. 

Sounds like a clever shortcut, right? Like using Grammarly to catch your typos. Except, when the sci-fi community caught wind, it was like someone flipped the self-destruct switch on a starship. 

Sci-fi fans generally loathe generative AI with a passion that could power a warp drive. They’ve been one of the loudest voices in slamming the technology for ripping off artists, spitting out generic drivel, and burning energy like a dragon with indigestion. 

Plus, as my confession confirms, AI is notorious for coughing up biased or downright false info, which means it could have been blocking potential panelists for no good reason, or giving others a free pass when they actually had opinions that might prove problematic. 
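To make that failure mode concrete, here’s a minimal, entirely hypothetical sketch (in Python) of the kind of pipeline Worldcon described: an automated screening pass whose output is treated as a hint, with every flag routed to a human review queue rather than acted on directly. The `ai_screen` stub and the names are invented for illustration; a real system would call an actual model, but the structural point is the same, since nothing should be rejected on the AI’s word alone.

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    name: str
    ai_flagged: bool              # raw output from the AI pass (may be wrong)
    human_verified: bool = False  # nothing is final until a person checks it

def ai_screen(name: str) -> bool:
    """Stand-in for an LLM call. Real models hallucinate, so treat
    this output as a hint, never a verdict."""
    # Purely illustrative rule so the stub produces some output:
    return "x" in name.lower()

def vet(applicants: list[str]) -> tuple[list[ScreeningResult], list[ScreeningResult]]:
    """Run the AI pass, then route every flag to a human review queue."""
    results = [ScreeningResult(n, ai_screen(n)) for n in applicants]
    needs_review = [r for r in results if r.ai_flagged]      # humans verify these first
    cleared_for_now = [r for r in results if not r.ai_flagged]  # still spot-checked later
    return needs_review, cleared_for_now

flagged, unflagged = vet(["Alex Xu", "Pat Doe"])
```

The design choice that matters is the `human_verified` field: it defaults to `False` for everyone, flagged or not, so the workflow can never quietly block (or clear) an applicant on unverified AI output alone.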

When this came to light, social media platforms—Bluesky, Facebook, X—exploded with fans and authors demanding resignations and boycotts. Heavyweights like Elizabeth Bear and Fran Wilde bailed from panels. Yoon Ha Lee, whose YA novel Moonstorm was a Lodestar Award finalist, pulled his book from contention. That’s huge, like the sci-fi and fantasy equivalent of turning down an Oscar nod. 

Then, on May 5, three big shots—Hugo Administrator Nicholas Whyte, Deputy Administrator Esther MacCallum-Stewart, and WSFS Division Head Cassidy—all quit, basically saying, “We’re done.” Yikes!

The rage wasn’t just about AI. It was about ignoring how many writers and creators feel about AI.

Jake Casella Brookins, from the Hugo-nominated Ancillary Review of Books, nailed it in his open letter: Worldcon’s leadership seemed blind to the community’s hatred of AI. This isn’t some side debate—it’s a roaring argument at cons, on blogs, and in X threads. Brookins called it a “disconnect,” like a writer ignoring their beta readers and wondering why their book then flops on publication day. 

Author Jasmine Gower also flagged that feeding names into ChatGPT might’ve breached Worldcon’s Privacy Policy, which is like leaving your manuscript on a park bench. She snagged a membership refund, and I bet she’s not alone.

Bond tried to clean up the mess with a May 2 apology, admitting her first statement was “flawed” and promising to redo vetting without AI, plus an external audit. Solid move, but too late. Worldcon 2025 has a scarlet letter now, and Brookins warned it might be remembered as “the Worldcon where panelists were picked by the racist plagiarism machine.” Ouch. 

Now, last time, I felt entitled to give Worldcon and the Hugo Awards a hard time for the controversy that arose. This time, however, the whole story made me uncomfortably aware of how I’d let ChatGPT damage my own credibility. That’s why I thought it was a good idea to look at the three lessons we can learn from what went wrong, rather than just dogpiling on an organization that remains deeply committed to celebrating the world of authors just like you and me.

Lesson 1: Know Your Readers As Well As Your Main Characters

This is an important one! And, again, a lesson I learned the hard way. You need to know your audience well enough to know when to tread carefully around certain topics. 

For example, my steamy romance novels tend to attract a mostly female audience with a slightly conservative political bias. Therefore, I can write about issues that are important to both of us (like the treatment of veterans, and government corruption) but I’ve learned to avoid topics which might alienate them, like criticizing certain political figures.

Likewise, the Worldcon crew should have remembered that sci-fi fans are a tight-knit community with strong opinions, and AI’s a dealbreaker for many of them. Instead, Worldcon ignored that, and soon learned what a mistake that can be! So, stay tuned to your readers, respect their beliefs, and hopefully you can avoid your own Bluesky pile-on.

Lesson 2: Be Honest, Like You’re Pitching to an Editor

Worldcon’s first statement was like a rough draft that needed a red pen, dodging privacy worries and brushing aside the AI backlash. Bond’s second try was better, but by that time, the trust was gone. As an indie author, you’re your own marketing team, so own your mistakes fast. If you screw up—by saying something insensitive, or backing the wrong side in a political debate—post about it on your blog or social media and explain where you’re coming from. You don’t have to compromise your own beliefs, and it might not even fix the issue if you do, but a genuine statement will at least show that you respect your audience enough to take responsibility for your words and actions. Transparency is your shield; wield it, and your readers will stick with you even if they don’t agree with you.

Lesson 3: Use Tech Like a Sidekick, Not a Star

Worldcon leaned on ChatGPT to save time, and I get it: vetting the social media handles of 1,300 people is an unthinkable amount of work. But they didn’t account for AI’s baggage. Tech can be a lifesaver for self-published authors when it comes to tasks like keyword research or Shopify analytics (I’ve used Grok for that). But for creative work? Tread carefully. Readers can sniff out inauthentic writing quicker than a plot hole. If you have to use AI, limit it to grunt work, not your story’s soul. Your voice and your heart are what make your book pop in a crowded Kindle store, and that’s the kind of magic AI will never be able to replace.

Conclusion

So, are the Hugo Awards toast? Hopefully not. 

Sure, I’ve doubted them before, but I was wrong: they’ve always bounced back.

The Hugo Awards survived the Sad and Rabid Puppies slates of 2015 and 2016, which gave us nominees like Space Raptor Butt Invasion, and they also weathered the 2023 Chengdu mess, during which Neil Gaiman got sidelined for political reasons.

Yes, this AI snafu is another bruise on an already tender reputation, but the Hugo Awards themselves have at least avoided the worst of it. Whyte and crew swore no AI was used for anything else to do with the awards process and Worldcon 2025 is still rolling, with the Hugo Awards set to dazzle on August 16, 2025. 

The show, as I keep learning, goes on.

Now, it’s all about rebuilding trust—and trust is something you should be building with your own audience. Know your readers, be transparent, and use tech smartly. 

But that’s just my opinion. Do you have your own thoughts on what happened with Worldcon this year? Do you think the Hugo Awards have damaged their reputation beyond repair? Drop your thoughts in the comments; I’d love to hear your take!


About the Author

Our Hidden Gems guest author for today.

Ginger is also known as Roland Hulme - a digital Don Draper with a Hemingway complex. Under a penname, he's sold 65,000+ copies of his romance novels, and reached more than 320,000 readers through Kindle Unlimited - using his background in marketing, advertising, and social media to reach an ever-expanding audience. 
