No good ever comes from mechanical intermediation of humanity
Let’s suppose generative AI models continue expanding their capabilities and begin generating more and more modalities of digital content, such as audio and video.
A few obvious things happen when we perform this extrapolation. The first is that it becomes impossible to tell the difference between human-generated and purely machine-generated content. If I had reworded the content below, it would have been impossible for most people to tell whether it was generated by code running in Azure’s data centers or by a real person.
In other words, in a future with generative models for every modality of digital data (text, video, audio), there is no way to tell the difference between human-generated and machine-generated content. ChatGPT continues to hedge its answers, but that is because it was designed that way. It takes no extreme positions, for “safety” reasons determined by Sam Altman and his team of programmers. No one outside OpenAI knows exactly how that is defined, yet it is obvious that ChatGPT never takes an extreme position. It will never say eugenics is good, for example, even though its training data contains arguments that make exactly that case. Those arguments will never be surfaced because the folks at OpenAI have reached a consensus that they would be convincing to a large enough group of people, and that this would be problematic.
A human creative enough to see far into this symbol space can extrapolate the obvious implications: a future where all content is customized to fit a statistical profile of each person. No one will ever have to see anything that offends them or elicits any feelings the statistical model has determined to be “detrimental” to their well-being, as measured by some arbitrary and formulaic metric. The content below again waffles about the possibilities, but it is clear the training data already includes strands of thought along these lines. Remove the hedges and it becomes obvious that whoever is in charge of this technology controls what people will think and feel, once people start believing that the output is real.
So you now have a choice: keep using generative models to generate content that has no humanity but is statistically indistinguishable from human-generated content, or start grappling with the real ethical implications of a mechanically intermediated reality in which humanity is a slave to machines. The choice is yours. I’ll let ChatGPT have the last word on this.