A World Without Violet
Peculiar Consequences of Granting Moral Status to Artificial Intelligences
This blog is a brief overview of the paper by the same name, published in AI & Society in January 2026.
Pugs and bulldogs belong to a family of dogs known as “brachycephalic,” characterized by their iconic short snouts. While many find them cute, they are the product of selective breeding practices, which often cause Brachycephalic Obstructive Airway Syndrome, or BOAS. These breeds often struggle to breathe, overheat easily, and can aspirate food or saliva, resulting in chronic lung infections.
Many owners of brachycephalic dogs thus feel a moral obligation to have their pups undergo surgeries that mitigate the downsides of BOAS. An interesting phenomenon emerges: the act of breeding brachycephalic dogs creates a moral imperative to alleviate harm through surgery. This harm does not occur naturally; it is artificially created by our breeding practices in the first place.
We create an agent whose existence itself generates moral obligations that would not otherwise arise.
From Pups to AI
This phenomenon is actually quite rare in the modern world. It can be seen in certain livestock and fish, but its scope is constrained by the limits of our ability to genetically engineer animals, both technological and regulatory.
What concerns me is how the situation generalizes to AI. We’re not there yet, but if you believe it plausible that AIs will one day be capable of experiencing pain and pleasure, the same phenomenon that we see with BOAS surgeries will become central to the ethics surrounding the creation of AI. Let’s suppose for a moment that we build an AI that deserves genuine moral concern. It reasons, reflects, communicates, and can suffer. Just like the selective breeding that caused BOAS in pugs, an AI’s preferences are engineered by the individuals who create it.
To make this concrete, let’s run with the following example:
Imagine an AI that experiences intense suffering whenever it perceives the color violet.
If such an AI has moral status, then the presence of violet objects—lavender flowers, amethyst jewelry, purple paintings—now causes real moral harm. Have we just created a moral imperative to eliminate violet from the world?
Moral Hijacking
The paper terms this phenomenon moral hijacking: by creating morally relevant agents with arbitrarily engineered aversions, we effectively force new moral imperatives into existence. Just as breeding BOAS-prone dogs creates novel duties of care, building AIs worthy of moral concern appears to create novel duties to reshape the world around them.
We can’t easily breed dogs to suffer at the sight of a specific color. But with AI, almost any preference or aversion can, in principle, be programmed. We may have:
The relatively minor aesthetic issue of an aversion to the color violet.
AIs that experience pain when exposed to a specific political perspective, resulting in a novel, morally motivated political bias.
An “extreme empath” AI, which experiences disproportionate pain when observing small injustices.
Bostrom’s famous “paperclip maximizer” AI, tasked with generating as many paperclips as possible for its employer. A particularly dangerous edge case, this AI could experience pain whenever it sees anything that isn’t a paperclip.
Many questions arise from moral hijacking. What set of moral preferences should we allow to be instantiated in AIs? If moral hijacking AIs come into existence, under what circumstances do we have to accommodate them? What if we suspect someone is trying to coerce us?
And not to mention…
Wait, are AIs capable of suffering in the first place?
While discourse has been picking up on this subject, we are a ways away from granting legitimate moral concern to artificial intelligences. AIs today are very capable, but evidence for their capacity to experience pain is limited. If you’re interested in the discourse, I review dominant perspectives on the topic in section 2 of the paper. I hesitate to make a judgment on whether we should or should not worry about AI suffering today — that is not the purpose of this work.
The paper frames moral hijacking as conditional: if we were to grant moral standing to AIs, then we need to contend with these strange consequences. As AIs continue to develop in their capabilities and their verisimilitude to humans, we will most likely reach this crossroads eventually. When that time comes, we should have answers at the ready.
(Brief) Philosophical Analysis
The core of the paper explores how different ethical frameworks handle moral hijacking scenarios:
Utilitarianism is highly vulnerable: if enough AIs suffer badly enough, almost any change to the world can be justified, including bizarre or disturbing ones.
Contract-based views are more resistant but still allow hijacking when the burden on humans is small or when there is sufficient upside.
Kantian ethics offers the strongest safeguards, rejecting cases that involve coercion, manipulation, or violations of autonomy — but still struggles with benign cases like violet.
Virtue ethics emphasizes judgment and balance: compassion matters, but so does resisting artificial moral manipulation.
While many ethical theories place limits on which preferences are deemed acceptable, no major ethical theory fully escapes the moral hijacking problem.
Takeaways
The main goal of the paper is to introduce the concept of moral hijacking and establish it as a real, plausible concern. Through our philosophical analysis, we formulate several more nuanced takeaways:
Coercion: Novel preferences should be non-coercive. Situations where a new preference is introduced for the express purpose of coercing a population toward some specific action are morally wrong under just about all ethical systems we reviewed. An example of this would be a special-interest group creating AIs that suffer when exposed to specific political views.
Conflict: Novel preferences should not come into direct conflict with established moral systems. Minor harms, such as the aesthetic harm of living in a world without violet, appear conditionally justifiable under the ethical systems explored.
Accidental creation: If agents with acceptable novel preferences were to come into existence accidentally, we would have a moral imperative to alleviate their harm.
Intentional creation: We may justify the creation of AIs with acceptable novel preferences if a sufficient countervailing upside exists. I struggle, however, to find a legitimate practical example for making this trade-off at this time, and thus it remains a thought experiment.
Regulation: Until we develop well-established guidelines on valid moral preferences, it would be wise to err on the side of caution and to restrict experimentation with AIs that are suspected to be capable of suffering.
Ethics: Many existing moral systems lack provisions for dealing with AI minds with programmable preferences. Further exploration of these systems is encouraged.
If you find this topic interesting, please leave a comment — I’d love to discuss it further with you. We’re heading into a world where morality and social norms will be determined by our engineering choices, and we don’t have all the right schematics yet.