Bookie: Structuring data from an LLM's latent space
Can we take advantage of the omniscience of AI to create useful (though flawed) novel datasets? Bookie is a modern web app and an experiment in using LLMs to synthesize a multidimensional "book landscape" that supports syntopical reading. It's a great tool for learning more about the book you're reading and for putting the next one on your list, and it's a showcase of automated content generation.
Bookie in Action
So long as a well-known title is given (e.g. *War and Peace*), Bookie can do a very good job without any grounding or tool-calling abilities.

Introduction
I built Bookie after noticing how often I was asking ChatGPT/Claude for book context and adjacent recommendations using the same handful of prompt templates. I wanted a faster, more structured way to (a) pull rich, contextual notes about whatever I'm reading and (b) situate that book inside the broader intellectual and publishing landscape along various quantitative and qualitative dimensions (e.g., publication date, accessibility).
The idea was inspired by many conversations about consumer products and content creation in the LLM space. The crucial design angle, breaking complex entities down into many dimensions and then manipulating them in that space, is a through-line across many of my other projects.
So what is Bookie?
What it does: Bookie is a deliberately hallucination-driven encyclopedia and "book comparer." You type a title, and the app (via the OpenAI API) responds without doing web search or any tool calling; the LLM has only its own trained and tuned weights. It returns a rich page on the book plus several comparison titles. Each book is scored across many dimensions, and you can navigate a graphical plot to hop from one node (book) to another and keep exploring the "book landscape". It's currently hosted (here) so anyone can use it.
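To make that concrete, here is a minimal sketch (not Bookie's actual code) of the prompt-wrapping pattern it relies on: a single OpenAI chat completion in JSON mode, with no tools or web search, asking for a structured book profile. The dimension names, prompt wording, and model name below are illustrative assumptions rather than Bookie's real schema.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical dimension names for illustration; Bookie's real set is larger and different.
DIMENSIONS = ["accessibility", "abstraction", "emotional_intensity", "canonical_status"]

def profile_book(title: str) -> dict:
    """Ask the model for a purely parametric (no tools, no search) profile of a book."""
    prompt = (
        f"You are an encyclopedia of books. For the book '{title}', return JSON with: "
        f"a 'summary' string, a 'publication_year' integer, "
        f"'scores' mapping each of {DIMENSIONS} to a 0-10 rating, and "
        "'comparisons': five related titles, each with the same fields."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # JSON mode: answers come from weights alone
    )
    return json.loads(response.choices[0].message.content)

print(profile_book("War and Peace")["scores"])
```

The returned scores are what feed the navigable plot: every book becomes a point in the same dimension space, so "comparison" reduces to distance and adjacency.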
Why not just talk to a chatbot: One topic that arose during development was: why would a user adopt a constrained GUI that is almost solely an LLM prompt-wrapper, instead of just having a bespoke conversation with the LLM directly? Fair question, and it helped me crystallize an insight about tools, technology, and app development: apps are opinionated workflows. Bookie packages my recurring prompts, normalizes the dimensions, visualizes the space, and turns a multi-step ad-hoc process into a one-click flow. Yes, a power user could replicate it manually, but if their use case is close enough to the one I designed into Bookie, they would be wasting effort.
(I've always said that if someone had built an Anki Sentence Suggester, I would have just used it instead of having spent months building my own.)
Wrap Up
Bookie started as a two-to-four-week detour from other work (the Anki Sentence Suggester project) and became a tool I still use. There are many possible improvements, especially now that APIs expose research tools; with GPT-5-era capabilities, I could turn on web search for more grounded outputs when desired.
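If I did add grounding, one possible route is the hosted web search tool in the OpenAI Responses API. The sketch below is an assumption about how that would look, not something Bookie does today, and the exact tool type string has shifted across SDK versions.

```python
from openai import OpenAI

client = OpenAI()

# Hosted web search via the Responses API; the tool type string
# ("web_search_preview") may differ depending on SDK/API version.
response = client.responses.create(
    model="gpt-4o",  # illustrative model name
    tools=[{"type": "web_search_preview"}],
    input="Give a short, sourced overview of the novel 'War and Peace'.",
)
print(response.output_text)  # convenience accessor for the generated text
```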
But the purely synthetic design is useful as a demo: it shows, impressively, the extent of LLM "knowledge", and it gives me a quick, navigable map of adjacent reading. It's an example of a broader pattern I care about: using LLMs to help carve structure into messy spaces and then exploring the result.