In the AI world there is a debate swirling about how much AI providers should censor their image generation. Of course, there are plenty of things to mock in past attempts to censor or otherwise put a thumb on the scale of AI to make it more socially appropriate. Exhibit A was the racially diverse Black SS stormtroopers created by Google Gemini, but anybody who's spent a decent amount of time using AI has run into these guardrails, and sometimes they can be annoying. I had a tragicomic experience myself in the early days of Midjourney, back when it couldn't get fingers right: when I tried to create a picture of Adam and Eve, it gave Adam multiple genitalia. I tried to regenerate the image specifying "no nudity," and got a warning that I was using a forbidden term and would be banned if I continued to try to create nude images.
The guardrails around religious topics in particular are so strict that it becomes difficult to do anything religious per se; one has to describe a religious scene without invoking religious vocabulary. (I assume the skittishness about depicting religious imagery is really just about depicting Mohammad, but they're trying to be consistent.)
However, in the past week or so the world was exposed to an almost completely uncensored AI tool with the release of Elon Musk's Grok 2 (because of course it's Elon). All of a sudden the Internet was flooded with images of Donald Trump flying an airplane into the Twin Towers, Donald Trump and Kamala Harris making out, etc. I would be lying if I said that last one did not make me laugh out loud, and there was a treasure trove of meme material.
However, as a faith that unapologetically has its own sacred cows, we obviously know where this is going. On the whole, all silliness aside, I think it's probably better for a corporation to put guardrails in place, to say "hey, that picture you want to generate isn't cool; you're clearly being a jerk to a particular group of people (above and beyond the good-faith jabs we all enjoy), so you can't use our tools to do what you're trying to do." Of course, open source is developing so quickly that, big corporations or not, the censorship debate is kind of a moot point. (As a matter of fact, Flux, the state-of-the-art engine driving Twitter's image generation, is now technically open source; it just takes a lot more computing power than most people have to run it.) Whether we like it or not, the extreme libertarians are going to get their wish, and virtually every photorealistic scenario (and, in the future, video) can and will be easily generated. And I mean every one; there is an ongoing legal/ethical debate about the status of computer-generated child pornography for pedophiles, for example. Nowadays it's hard to get a law passed based on "it's just wrong"; you have to show harm. Proponents say artificially generated child pornography is "victimless," since no actual child is being abused. But whether it leads to more offending hinges on the social-scientific question of whether pornography substitutes for the real thing, thus reducing demand, or whether it entices behavior, and my understanding is that the science is unsettled on that question.
But back to our sacred cows. If there is a way to disparage what somebody holds sacred, not only will people do it in an attempt to become some kind of edgelord, but they will see it as a moral act (often while sitting in their mother's basement). Of course, this has always been with us, whether with the people smearing garments with dirt at general conference or the adult film producer who shot a scene in a temple locker room.
You would think it goes without saying, but people who engage in such behaviors are objective ******** (I’ll let you count the asterisks, TS is PG) by any reasonable ethical framework. Even if I left the Church and hated Joseph Smith with every fiber of my being, I would think that intentionally mocking a temple ritual (or a Catholic Mass or an Islamic prayer) that good people find solace in is being an *******, regardless of background.
However, with open source AI image generation, all sorts of creatively desecratory images of our sacred rituals are on the horizon. (Of course, to paraphrase DeMille, you can't desecrate the temple, you can only desecrate yourself.) So it's something to be aware of from those who "leave the Church but can't leave it alone" (a category which does not include most ex-members, but includes enough).
Meanwhile, out in open source land (or really, open weight land), Flux has undergone so much tinkering and refining that it's impossible to keep up. It was runnable on consumer hardware from the start, but now it runs even on older hardware at reasonable speeds, and without even Grok's porous guardrails, just the safety measures built into the model. And even those are being whittled away as users create variations and modifications. Basically the whole Stable Diffusion toolchain and training environment got translated to Flux practically overnight.
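For the curious, here's roughly what "runs on modest hardware" looks like in practice. This is a minimal sketch using the Hugging Face diffusers library's Flux pipeline (the model id and API are what I understand to be current, but the ecosystem moves fast); the CPU-offload call is the trick that squeezes the model onto older GPUs:

```python
# Minimal sketch: generating an image with the open-weight Flux model
# via Hugging Face diffusers. Assumes diffusers, torch, and accelerate
# are installed; details may have shifted since this was written.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",  # the fast, distilled open-weight variant
    torch_dtype=torch.bfloat16,
)
# Keep weights in CPU RAM and stream them to the GPU as needed,
# so the model fits on older cards with limited VRAM.
pipe.enable_model_cpu_offload()

image = pipe(
    "a watercolor painting of a mountain chapel at dawn",
    num_inference_steps=4,  # schnell is distilled to need only a few steps
    guidance_scale=0.0,     # the distilled variant ignores guidance
).images[0]
image.save("out.png")
```

The few-step distillation is what makes generation feel reasonably fast even on older cards.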
On this side of the fence, the outrage over black SS officers never made sense. Do you want white SS officers and Latina scientists? Well, just update your prompt and generate the image you want instead of waiting for someone to do the work for you. Do you want a model with better representation? Well, just create a modification or model that does it. There are plenty of examples of people doing exactly that, and the barriers to entry are low. You don't need to invest $100,000 in compute time to create your own model – you can mix and match existing models, train a small add-on on home hardware, or rent capable cloud hardware for a very reasonable price. Do you want to add your own face to a latest-gen image model? It's very doable, and surprisingly inexpensive.
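To give a sense of how low the barrier is, here's a sketch of loading one of those small add-ons (a LoRA) on top of the base weights, again with diffusers. The LoRA repo id here is a hypothetical placeholder for whatever add-on you trained or downloaded:

```python
import torch
from diffusers import FluxPipeline

# FLUX.1-dev is gated: you have to accept its license on Hugging Face first.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# "your-name/your-flux-lora" is a hypothetical id standing in for any
# community LoRA: a small add-on trained on top of the base model,
# e.g. on a few dozen photos of your own face.
pipe.load_lora_weights("your-name/your-flux-lora")
pipe.fuse_lora(lora_scale=0.9)  # blend the add-on into the base weights

image = pipe(
    "portrait photo in the style the add-on was trained on",
    num_inference_steps=28,
).images[0]
image.save("portrait.png")
```

Training such an add-on yourself is the "rent some cloud hardware" step: typically a small image set and an hour or two of GPU time, using the same toolchain that got ported over from Stable Diffusion.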
But there are, as you say, downsides to all this, with pornography only the most obvious one. One recent open model project led by people deeply involved in the field decided the only way to proceed was to include no images of children in the training data whatsoever.
So yes, this is all going to put some powerful tools in the hands of people who think mocking others for their beliefs is okay, as long as you mock the right people. I think we have to be able to say that cruelty is wrong, and also be able to identify behavior as cruel when it's directed towards us, without having to laugh along when people ask why we can't take a joke.
I do appreciate the consistency.
I was trying to do some research into how AI would compare the Orthodox chrismation ceremony to the LDS ritual.
ChatGPT refuses to discuss the LDS temple in detail and gives respectful reasons why.