I Vibe Coded My Election Visualizations
One of my old colleagues, Dan, challenged me to try “vibe coding”: creating software using only about $20 worth of AI services. I signed up for Claude Pro, Anthropic’s premium plan, and asked an AI development assistant to make detailed election maps for me. The results look and feel impressive, though they required a lot of refinement, and they blew through my session usage limits quickly.
Since late 2019, I’ve been visualizing elections in King County, Washington, using data that shows votes in each of the county’s thousands of precincts, including more than 1,000 in Seattle. The project started as a way to improve my Tableau calculation and mapping skills, and it has connected me with campaigns and journalists whose work I can support directly. At this point, I typically reuse the same Tableau workbook from one election to the next, swapping in updated precinct shapefiles and election results, which the county provides in standard formats. The most time-consuming step is going through every race, of which there can be hundreds in an election, to ensure that candidates have contrasting, appropriate colors and that all the names are encoded correctly.

Tableau Public kindly hosts all my visualizations for free, but as Tableau has been absorbed into Salesforce and its strategy and personnel have changed significantly, I’ve wanted to build visualizations in other ways. During my most recent job working on open source Jupyter projects, I built a Jupyter notebook to convert shapefiles and results into interactive visualizations, but they didn’t look as good as Tableau’s output, and the resulting files were so large that hosting them could have become a financial burden. I decided to see if Claude Code could build some maps for me.
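At the heart of any pipeline like this, before the shapes are drawn, is a tallying step: turning a county results export into per-precinct leaders and vote shares. Here’s a minimal sketch of that step in pure Python. The column names (`Precinct`, `Race`, `Candidate`, `Votes`) and the sample rows are hypothetical; a real county export uses its own layout and covers hundreds of races.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample mimicking a county results export. Real files use the
# county's own column names and contain thousands of precinct-level rows.
SAMPLE = """Precinct,Race,Candidate,Votes
SEA 37-1001,Mayor,Candidate A,412
SEA 37-1001,Mayor,Candidate B,388
SEA 43-2042,Mayor,Candidate A,150
SEA 43-2042,Mayor,Candidate B,450
"""

def leader_share_by_precinct(csv_text, race):
    """Return {precinct: (leading candidate, leader's share of votes)} for one race."""
    tallies = defaultdict(dict)
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["Race"] == race:
            tallies[row["Precinct"]][row["Candidate"]] = int(row["Votes"])
    result = {}
    for precinct, votes in tallies.items():
        total = sum(votes.values())
        leader = max(votes, key=votes.get)
        result[precinct] = (leader, votes[leader] / total)
    return result

shares = leader_share_by_precinct(SAMPLE, "Mayor")
```

The resulting dictionary is what gets joined to the precinct shapefile — each precinct’s leader and margin drive its fill color on the map.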
I responded to my friend’s challenge by plunking down $20, plus tax, for one month of Claude Pro and installing Claude Code on my aging MacBook Air. My first attempt at a visualization led Claude to build an app that tried to download files directly from county websites; it never displayed a usable map, and no amount of follow-up prompting fixed it. I instead gave Claude Code access to copies of the county’s shapefiles and results files that were already on my hard drive, and I asked it to produce static output that I could host using an Amazon S3 bucket with Amazon CloudFront as a distribution network. It asked me a few follow-up questions, like a good job interview candidate, before displaying a series of progress messages and animations. It explained what it was doing as it took over 7 minutes to produce code for me. Less than 30 minutes after I installed Claude Code, I had a web page that displayed election results in a color-coded, readable, mobile-friendly way. This was great progress, so I decided to think bigger, which required a lot of follow-up work with Claude Code.
I’ve heard a lot about AI coding assistants as a disruptive technology, particularly for companies that are used to hiring junior developers. In my experience, Claude Code behaved like a friendly, obedient junior developer with a very restrictive work schedule. With the Claude Pro plan, I have session usage quotas that reset every 5 hours or so, as well as weekly usage quotas. After just four sessions, I had already used a quarter of my weekly quota, and one of those sessions ended abruptly because I had reached my usage limit. Although Claude Code lets me queue up multiple prompts, it’s not really conducive to the “flow” state that’s essential to productivity. Even simple actions, like find-and-replace operations that modern text editors perform near-instantaneously, took 10 seconds or longer in Claude Code, distracting me from more complicated tasks. Especially when we were getting started, Claude asked me a lot of follow-up questions out of an abundance of caution; unlike a junior developer, it didn’t seem to learn from my responses. I made a few explicit suggestions, such as using U.S. English spelling, that it saved for later sessions. Claude Code also made me choose between approving every file change and script run it wanted to make, or giving the AI free rein to do whatever it deemed necessary, with seemingly total access to all my personal files. Unlike a human developer, Claude has no accountability for its actions; had it done something destructive, I’d have had to rely on my hard drive backups to bail me out. If I needed Claude Code to work like a full-time employee, I’d almost certainly need to upgrade to Anthropic’s “Max” plans, which start at $100 per month, or pay for even more usage on top of that. As with all hosted software providers, Anthropic could lower my usage limits, suspend my account, or raise its prices at any time; in other words, my vibe coding assistant could demand — and receive — a raise at will.
Claude Code works fast, but it doesn’t think big, and it often makes errors, especially as a front-end developer with no eyes and no fingers. Particularly on mobile web browsers, my election visuals sometimes had elements that were too small to see or touch, that overlapped each other, or that had nonsensical yet correctly spelled labels. It chose green and red for ballot questions answered with “Yes” or “No”, unaware that this is a bad visualization practice: it makes the map unreadable for people with red-green colorblindness, which affects up to 8% of men and 0.5% of women. (To its credit, Claude Code included some accessibility markup in its output.) Its HTML output included scripts and styles inline, with little semantic markup and few class names. This sets a bad example: Claude Code can edit its own code just fine, but when I looked at it, I didn’t like what I saw, and Claude Code improved its style only when I explicitly told it to. Sometimes, when I asked it to solve one problem and then a second, the first problem reappeared. For this experiment, I delegated to Claude Code entirely, but in a production environment, an operator really needs to blend AI-driven coding with well-informed guidance and knowledge of best practices.
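The fix for the red-green problem is a colorblind-safe diverging scale, such as interpolating between orange and blue instead of red and green. Here’s a minimal sketch of that idea; the specific endpoint colors are illustrative choices, not the ones I ultimately used.

```python
# Illustrative orange and blue endpoints for a colorblind-safe diverging scale.
NO_COLOR = (230, 97, 1)     # orange: strong "No" lead
YES_COLOR = (5, 113, 176)   # blue: strong "Yes" lead

def yes_no_color(yes_share):
    """Map a Yes vote share in [0, 1] to a hex color on an orange-blue scale."""
    t = min(max(yes_share, 0.0), 1.0)
    # Linearly interpolate each RGB channel between the two endpoints.
    rgb = tuple(round(n + (y - n) * t) for n, y in zip(NO_COLOR, YES_COLOR))
    return "#{:02x}{:02x}{:02x}".format(*rgb)
```

A precinct that voted 100% No gets pure orange, one that voted 100% Yes gets pure blue, and close races land in muted colors in between, which conveniently also signals a narrow margin.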
I’ve used a few different AI tools for coding assistance, including developer extensions like Jupyter AI, which my team at AWS built and still maintains. They can produce code so quickly that they impress and sometimes scare me. Their output superficially does the job requested but falls apart in the face of scaling questions, human usability, and security challenges. They provide no accountability and not enough insight into new and complex problems; they can produce output that doesn’t answer the question they were asked. They make fantastic prototyping and experimentation tools, and as an experienced developer, I think they do a great job of automating the boring, repetitive tasks that other people have already solved. I also appreciate that once my one-month Claude Pro subscription ends, I can keep using Claude Code with Anthropic’s models hosted on Amazon Bedrock, which charges a metered rate based on usage, without the quotas of Anthropic’s own Claude Pro plan. Perhaps, when computer prices normalize again, I could host my own AI models locally. The future’s looking bright and productive, whether or not I have a job in it.
At publication time, I owned shares of Amazon and Salesforce; the latter company owns Tableau. None of this article about using large language models (LLMs) was created or edited using an LLM. All em dashes were authored by hand.