AI Policy
Last updated: 2026-03-22
Previously: “Perplexed with Perplexity”, “But What Is It *Good* For?”, “Let’s Think Step-by-Step”
This is my AI policy — on the usage of large language models, diffusion models, and similar tools. First I have a set of guiding principles, then a set of practical commitments, and finally a set of references that I’ve consulted in writing this.
Principles
LLMs are primarily a cultural technology
This Gopnikist position is the core way I conceptualize large language models — that they are, primarily, a cultural and social technology that interpolates “all human writing ever, plus some reinforcement learning” to produce useful results. That doesn’t weigh on the ethics of using them, but it’s important context to keep in mind as you read the rest of these principles.
LLMs are genuinely useful
On a purely tactical level, LLMs are a useful tool to automate specific tasks, in a way that can’t be emulated by any other tool currently in existence; there are specific problems I’ve solved that would not have been worth my time to solve if not for LLMs.1 It’s certainly worth thinking through whether that automation is good for society or good for individuals or good for the environment — a worthwhile exercise for most forms of automation! — but I occasionally encounter Ed Zitron types who seem convinced that LLM users are simply deluding themselves with a “bullshit generator” and that it’s all a crypto-style grift. I simply can’t take that stance seriously; the interesting tension here is a genuinely useful tool that also comes with genuinely difficult ethical considerations.
Personal utility can outweigh contribution to societal problems
Given a tool that is both useful and harmful, how should we decide whether and in what ways to use it? A major principle I’m using is the comparison between how much utility I personally get from the tool and to what extent my usage contributes to serious harm. If I use a gun to shoot someone for no reason and for no benefit to myself, that is pretty clearly unethical. If I use my laptop to make my living, but some component depends on a raw material that is being exploited by an armed guerrilla group to fund a civil war, I feel the usefulness of the laptop to my life can reasonably outweigh my contribution to a (distant, complex) harm.
Some would say that’s no excuse, and that we should abstain from any behavior that contributes to harm, no matter how indirectly or disproportionately. I am, alas, too pragmatic for that view. There is, as the saying goes, “no ethical consumption under capitalism,” only more or less ethical choices; modernity is simply too complex to expect more. I’ve chosen “personal utility > direct, localized harm” as my guidepost, and I merely hope to remain consistent with it.
Now let’s apply this to a few choices!
Let’s consider smoking. I do not and never would smoke. On the one hand, the utility is marginal — I enjoy stimulation, but not that much, and there are other widely-available stimulants that won’t give you lung cancer. On the other hand, some of the societal harms are very direct — secondhand smoke exposes the people around you to one of the worst carcinogens. Scale is more interesting: in a society where virtually everyone smokes, the personal decision not to smoke is a near-meaningless “drop in the bucket”, but in a society where smoking rates are cratering, the decision not to smoke has a proportionally larger impact on whether anyone is exposed to the harms of smoking.
Similarly, we can consider alcohol. In terms of personal utility, alcohol is fun — there’s a reason it’s sometimes called “social lubricant,” and that can’t be easily replicated by any other commonly-available substance. While it has some health impacts, a glass of red wine isn’t going to give me cancer and may even have some health benefits. In terms of harm, while drinking alcohol contributes to a culture that accepts a certain degree of drunk driving and alcoholism, I personally am not harming anyone else by consuming alcohol, and I do my best not to pressure those who’ve decided not to drink. In terms of scale, alcohol has a 10,000-year-long history and is present in virtually every human culture; I was a very serious teetotaler for 27 years of my life and had virtually no impact on alcohol use globally.
To my mind, LLMs as they currently exist sit right in the middle of these two examples2. As discussed in the previous section, agentic coding in particular is genuinely useful, as is using LLMs as a “calculator for words”. On the other hand, there are very real ethical concerns with the societal problems that these systems might introduce. However, those ethical concerns are attenuated by indirectness and scale:
- In terms of directness, many of the potential harms of LLM usage are vague and indirect. Will it displace entire professions? Will it harm the education system? Will it cause widespread LLM psychosis? Is it reliant on low-paid third-world labor? Does it encourage intellectual property theft (on which more below)? These are all valid concerns, but my own usage of LLMs doesn’t directly contribute to these issues. The most direct harm — and the one I’m most concerned about — is environmental cost. AI data centers are power-hungry and water-hungry and land-hungry. Based on my current understanding, I am comfortable with the environmental cost relative to utility, at least given all the other energy-hungry things (driving, international travel, food, fashion, …) we in the Global North do without really thinking about. But this is the aspect most likely to change my thinking — if I were given strong evidence that each query to Claude burnt down a square mile of Amazonian rainforest, I would probably stop using it regularly.
- In terms of scale, LLMs are extremely widely used. Famously, OpenAI claims ChatGPT was the fastest-growing consumer product of all time, and chat apps regularly top the iOS App Store. My on-and-off $20-a-month subscription to Anthropic does technically contribute directly to the proliferation of LLMs, but we’re talking about a company that’s raised billions of dollars and books hundreds of millions in revenue, and that’s just one of the major vendors. Whether it’s all a bubble, or whether companies are simply forcing it down our throats, is immaterial to this argument — the point is that my decision to use LLMs or not contributes a proportionally tiny amount to the ongoing harms.
This principle is probably my most important guide. I use agentic coding tools heavily, because they easily provide the most utility of any LLM application. I use LLMs for writing-adjacent tasks in moderation, because they provide some utility. I don’t use image, video, or audio generators, since I don’t see a use for any of those, and the harms (primarily, to artists’ livelihoods) are more direct.
Computers should aid human creativity, not replace it
One of the major intellectual traditions I identify with is “computing as liberal art” – with “bicycles for the mind” and a calligraphy class inspiring the Mac’s typography and “humanistic computation” and Dynamicland and apps as a home-cooked meal and the html review and, yes, even “I wanna fuck my computer”. I’m not sure all those authors and technologists would even agree they’re part of a coherent tradition, but I’m fairly certain they would all agree that computation should aid human creativity, not replace it. Computers should be a tool for humans to use, not the other way around.
In particular, when it comes to my own creative practice, I’d argue that all writing contains communication, thought, and personality. Writing can never be “purely” functional — all writing is a way of thinking to oneself, communicating with others (including oneself in the future!), and expressing personality through voice, whether the writer intends it or not. In comparison, most code is much closer to being purely functional, even if it also contains some communication, thought, and personality.3 Using an LLM for any non-trivial amount of writing therefore defeats the point of writing, because it forfeits the opportunity to communicate, think, and express yourself; using one for code doesn’t carry the same cost.
On another note, skills are like muscles; if you don’t practice, you eventually lose the skill. I don’t want to lose my ability to reason about computational systems, and I definitely don’t want to lose my ability to write, but I don’t really mind forgetting some specifics about TypeScript syntax.
Intellectual property theft is irrelevant to usage
Many authors and most artists have a strong intuition that LLMs and diffusion models are fundamentally built on theft and therefore are unethical to build. I’m not sure I fully agree, but in any case, that doesn’t weigh on whether they’re unethical to use. (The multi-billion dollar model providers should have made it opt-in and compensated authors, but that’s between the companies and the authors.) The actual output of an LLM, unless very intentionally prompted to do so, doesn’t plagiarize anything specific from any specific author. Using it directly in place of my own writing still feels a little iffy, but I don’t do that, for the reasons explained in the last section.
Ironically, this critique feels more relevant to coding use cases, where the constrained space of outputs does make it more likely to plagiarize a particular piece of code and I do directly use the output of the LLM. However, coders don’t seem to mind as much, perhaps because of the strong open-source culture of the past few decades; when so much code is reuse anyway, the “plagiarism” of an LLM doesn’t feel that different.
Automation is a balance between reliability and consequences
Video essayist HGModernism recently released a video discussing the behavior of modern agentic coding tools. One useful concept she introduces is the balance between reliability and consequence of failure when deciding what to automate. We should automate when the process is reliable and the consequences of failure are low; we should still automate when the process is unreliable but the consequences of not automating are high. We should avoid automating, however, when the process is unreliable and failure carries severe consequences, though pressure to automate may still exist even then.
I like using this framework to think through whether to automate with LLMs. I’m happy to “vibe code” (as in, not even look at the code) when I’m writing a shell script for my personal use, where the worst that can happen is that my shell gets a little messed up. In a more professional setting, I want to review every line that goes into production. I trust LLMs for inconsequential queries or very basic idea generation, but for anything I want to reference later, I make sure to check the original citations.
LLMs probably aren’t conscious
I try to maintain my “farmer” skeptical mindset, so I am open to the idea that LLMs are phenomenally conscious, or that they’re a “genuinely new kind of entity”. But, frankly, I’ve always thought it’s just pareidolia — I assign a very low probability to LLMs being genuinely phenomenally conscious4, given their current architecture. So, on the one hand, I don’t account for Claude’s well-being in my usage (though I try to remain polite, occasional jokes about clankers aside, since it seems ethically corrosive to be needlessly rude to an entity that might be conscious). On the other hand, I don’t think it’s possible to have a “relationship” with an LLM in the way we can with a dog or a child, let alone another adult human, and I find using LLMs as therapists or parasocial companions alarming.
Practices
So, given the above principles, what is my actual AI policy?
- I use agentic coding tools heavily, primarily Claude Code. I’m still experimenting with how independent I’m willing to let it work; I have vibe coded some projects before, for my personal use, but in a professional setting, I still hand-review (and often edit) every line.
- Every word I write (whether here or somewhere else) comes from my big wet noggin. I don’t use LLMs to write or rewrite.
- For software engineering, that includes documentation, PR descriptions, Slack messages, tickets in an issue tracker, and all other incidental writing. That generally also includes code comments unless it’s a project I explicitly call out as “vibe coded” (usually for my personal use, where it’s not worth ripping out or rewording all of Claude’s comments).
- I do use LLMs for light proofreading (of the “hey you forgot to finish this sentence” variety), finding words “on the tip of my tongue”, or suggesting synonyms. But in those cases, I never take more than a word or two of output, and I rarely do that more than once every few thousand words. I’m experimenting with using it for more substantive critiques, but I don’t use its wording directly.
- Generally speaking, I am willing to use LLMs as a “calculator for words”. I use LLMs for simple queries or calculations where the answer isn’t too important, like decoding laundry tags. I only occasionally use LLMs for more detailed research and, in those cases, rely heavily on the provided citations, using it as a “better Google search”. I remain suspicious of hallucinations and generally don’t trust LLM-generated summaries.
- I mostly avoid using LLMs for idea generation, except for very specific, low-importance tasks (e.g. I needed a couple Latin-sounding names following a specific pattern for a particular worldbuilding project, and Claude was able to come up with decent suggestions).
- I do not have any kind of parasocial relationship with LLMs. I don’t use them as therapists or life coaches and never ask for advice (unless it’s a straightforward, factual query). I don’t use them for any kind of journaling or introspection.
- I don’t currently use any video, audio, or image generation tools. I get the ick when I see a blog post headed by clearly-generated art. I strongly prefer using public domain art or my own images as key art, and I don’t see a need to integrate any of these generation tools into my life. I did use image generators in the past, when their outputs were more primitive and, frankly, more interesting, and I’ve kept those images on this blog for historical reasons; and my soon-to-be-former employer heavily uses video generation, which I have complicated feelings about working on.
- On the other hand, I don’t automatically hold it against anyone who does want to use generative AI in their artistic practice; finding creative ways to (mis)use the tools available is what being an artist is all about. In practice, however, I have yet to see this happen in an interesting way.
- I’m comfortable paying for LLMs in moderation. That said, I prefer using vendors that seem more ethical or thoughtful (not that I would describe any as perfectly thoughtful or aligned with my values, but more thoughtful than others), which these days means Anthropic.
References
- “Measured AI” by Gina Trapani, which was an invaluable reference that largely aligns with my own stance
- “When AI Gives You the Ick”, from Splash Literary, which inspired me to write up my principles, as well as Alan Jacobs’ statement of principles
- “What do coders do after AI?” by Anil Dash and “Grief and the AI Split” by Les Orchard, which go a long way towards explaining why software engineers are much more excited about LLMs than other professions
- “When lying is the best strategy for AI”, a video essay by HGModernism, which is interesting throughout, but I’m specifically including here for her introduction of a reliability-versus-consequence-of-failure framework for thinking through automation; “Flood fill vs. the magic circle” also discusses the pressure to automate
- “I HOPE YOU DON’T USE GENERATIVE AI” by Ruby Morgan Voigt (maker of delphitools), expressing their stance on using generative AI as a primarily-designer-sometime-programmer; I don’t think it’s fully philosophically thought-through, but I do appreciate hearing from someone using LLMs to build something that is genuinely useful
- “AI makes the humanities more important, but also a lot weirder” by historian Benjamin Breen, reflecting on the pros and cons of LLMs for historical research; somewhat orthogonal to this document, but nevertheless interesting
- “Large AI models are cultural and social technologies”, the founding document of Gopnikism; see also Gopnik’s own talk and “Cosma Shalizi Is Aware of All Internet Traditions”
Changelog
- 2026-03-20: Initial publication
Footnotes
1. Recently I had Claude Code adapt fzf-git for use with jujutsu. That probably would have taken at least two or three hours of my time before LLMs, which I would have struggled to justify; with Claude Code, it took about a minute to write the prompt, and when I came back from lunch it was ready for use. ↩
2. Two more examples! First, fast fashion. The critiques of the lower end of the fashion market are very well known, and I try my best to avoid the worst of the worst of fast fashion. But fashion is a world-spanning industry with extremely complex harms, and at the end of the day having dry socks provides so much utility that it doesn’t feel worthwhile trying to figure out whether Darn Tough is really the best, most ethical choice or whether I should be knitting my own socks from scratch. Second, crypto. I’d agree that the harms of crypto are just as “distant” as the harms of LLMs. But, unlike LLMs, I never understood the intended utility of crypto, especially on a personal level. So, other than some very light poking around to keep up (as in, I followed a single “getting started with Ethereum” tutorial once), I never had anything to do with crypto. ↩
3. Another way of thinking about this is to consider the variance in expression. If you tell a dozen people to write their own version of a short argument, you’ll probably get a dozen very different expressions, even if the core argument is the same. On the other hand, if you ask a dozen people to write the same JavaScript function, at least half are likely to be very similar, if not identical; code simply isn’t that expressive! ↩
4. Or, rather, given my slightly wonky illusionist beliefs about consciousness, it might be more appropriate to say that I assign a high degree of probability to LLMs being minimally conscious (certainly no more than, say, an ant). ↩
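As a concrete (and entirely hypothetical) illustration of footnote 3’s point about code’s low expressive variance: ask a dozen JavaScript programmers to clamp a number into a range, and most will land on something very close to this (the function name and task are my own invented example, not from any study):

```javascript
// Clamp `value` into the inclusive range [min, max].
// Most implementations converge on exactly this one-liner.
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

console.log(clamp(15, 0, 10)); // → 10
```

A dozen people summarizing the same short argument in prose would diverge far more than a dozen implementations of this function.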