In recent years, AI has become a hot-button topic, owing both to its surging popularity and to the controversy that has followed it.
First of all, what exactly is AI? AI, or artificial intelligence, is software that learns patterns from large sets of training data rather than following explicitly programmed rules. This data “teaches” the model: by absorbing the available information during training, it becomes able to carry out its own behavior and tasks independently, without human supervision. Most recently, programs like ChatGPT, Sora, and DALL-E have gone viral, giving anyone the ability to generate fantastical and absurd images and text simply by typing out a prompt. These models are known as “generative AI,” as their responses to a given prompt are generated based on the data they have been fed. Images fed to a model are attached to keywords or descriptions, and those that align with the prompt shape the final image. The more expansive the dataset, and the more powerful the hardware, the more convincing a response will be. These images are not always perfect, however, and often show a distinctive lack of consistency in pattern and anatomy. Generated images also tend to display something called “artifacting”: visual anomalies in digital data, which in this case can range anywhere from a misplaced pixel to a floating head. All of this creates the signature “wonky” look that AI is known for. Chatbots and other text-output models have their own tells: a slightly mechanical tone, along with occasional grammatical errors.
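For readers curious what that “keyword alignment” looks like in practice, here is a deliberately toy Python sketch. Real generative models learn statistical associations across billions of captioned images using neural networks, not literal keyword matching; every name and data point below is invented purely for illustration.

```python
# A tiny made-up "training set": image identifiers paired with the
# caption keywords that were attached to each image.
training_data = {
    "img_001": {"castle", "mountain", "sunset"},
    "img_002": {"castle", "dragon", "night"},
    "img_003": {"beach", "sunset", "palm"},
}

def best_matches(prompt: str, data: dict, top_n: int = 2) -> list:
    """Rank training images by how many prompt words overlap their captions."""
    words = set(prompt.lower().split())
    # Sort images by overlap size, largest first.
    scored = sorted(data, key=lambda img: len(words & data[img]), reverse=True)
    return scored[:top_n]

# The prompt "a castle at sunset" overlaps most with img_001's caption.
print(best_matches("a castle at sunset", training_data))
```

A real model would then blend what it has learned from the best-aligned training examples into a new image, rather than retrieving any single one, which is also why a larger dataset tends to produce more convincing results.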
From video game bosses that learn your moves, to medical assistants connecting you with specialists at any hour, AI has been a growing presence in our lives since the 1950s. So, with it being on the radar for so long, why is it suddenly becoming so controversial? The answer lies in how the AI is actually built. A computerized opponent in your game of desktop solitaire, or the face ID on your phone, operates within narrow, predefined limits, unlike DALL-E or Stable Diffusion, which rely on enormous networks of data to visualize a given prompt. Because of this reliance on compiled datasets, and nearly unlimited access to the internet’s publicly available imagery, many fear that AI might feed off of unsavory content or original artistic works.
Many self-proclaimed ‘AI artists’ have trained custom models on the works of select artists in order to get an output resembling a desired style. These occurrences have sparked outrage in the artistic community as of late, as many argue that one could simply pay the artist for a commission, and some go as far as to call AI ‘art’ “unethical” and “theft.” The general consensus among these artists is that the rise of generative AI is putting their livelihoods at risk. Defenders of AI argue that such an underdeveloped technology poses no threat to these artists’ work, and that these programs open up artistic opportunities to disabled or low-income creators, but this claim has begun to falter as AI’s presence has grown commercially in addition to its consumer usage.
Artificially generated imagery has recently made its on-screen debut, as films and series have been spotted incorporating generative AI content into their work. Some of the most notable sightings, like the AI-animated opening of Marvel’s Secret Invasion or the title cards from the 2023 film Late Night With the Devil, have garnered particular attention, with theories circulating about why the AI was used and how this will affect future productions. Some theorize that the wave of unionization among animators and artists from 2021 to 2023 has decreased the appeal of hiring those artists onto projects that could instead be supplemented with AI imagery, while others suggest it is merely an artistic choice. Executive producer Ali Selim (Secret Invasion) has stated that the use of AI in the series’ opening credits was intended to draw parallels between the show’s overarching themes and the inconsistencies in the animation. David Dastmalchian, the lead actor of Late Night With the Devil, released a somewhat similar statement regarding the reaction to his film, placing heavy emphasis on the fact that a graphic design team and the art department had oversight of the AI-generated elements, tweaking and adjusting them as needed. Dastmalchian also said he found it regretful that the presence of AI in the film, and the controversy surrounding it, has begun to overshadow the hard work of the artists who made the film.
Not only have television programs and films begun to utilize these AI models, but the first AI-generated commercials and advertisements have also started finding their way into the ad market. Despite displaying massive inconsistencies, Toys R Us debuted the first fully AI-powered advertisement, marking a major leap in what these programs can do and might eventually do. Recent Christmas advertisements from Coca-Cola have also used more advanced generation, yet they still face the same backlash.
Many readers are likely also familiar with the growing popularity of generative models built for text output (seen in programs such as OpenAI’s ChatGPT or Meta AI), as their ability to skim available information to answer a prompt makes them seemingly perfect candidates for use in school settings. These models have proven equally controversial, as the topic of plagiarism is once again debated. Perplexity, an AI startup, was recently issued a cease and desist by the New York Times over unauthorized use of content published by the paper. Similarly, strikes by writers’ unions have highlighted the potential uses of AI and how these developments could injure the employment opportunities of future writers.
Another moral quandary created by the sources generative AI draws from is that these models are fully capable of producing graphic and racist imagery based on material available on the internet, as their filtering of sensitive material is far from reliable. Many text models, such as Google’s AI features, include warnings that the AI may use racist language, and DALL-E image models may feature racist caricatures in their generations. In addition to any racist media the AI has access to, these models have also shown evidence of drawing from CSAM, or Child Sexual Abuse Material. According to a Cornell study, certain keywords entered into one of these models may prompt it to draw on actual abuse material. An article from the BBC, cited in the Cornell paper, describes how a group of schoolgirls from a small town in Spain were confronted with images of themselves that had been altered using AI to make them appear nude, most likely generated by a model drawing from CSAM.
Aside from the potential impacts that generative AI might have on the creative industry, studies have also revealed a distinct environmental cost of increased AI usage. Because the technology is so widespread and advancing so quickly, incredibly high-powered computers and processors are required to run these models. In fact, according to the Harvard Business Review, the electricity required to power AI is projected to increase tenfold by 2026, an intake larger than the entire annual electricity consumption of Belgium. Studies cited in the same paper also suggest that training just one AI model burns fossil fuels at a level comparable to the carbon emissions produced annually by ‘hundreds of households.’ The materials from which microchips are made are mined in ‘hazardous’ ways, likely to damage the surrounding environment and harm the workers. These rare earth metals are also typically extracted on a large scale, resulting in massive amounts of waste and poorly managed harmful runoff. Data centers have been found to produce hazardous e-waste, such as mercury and lead, which can negatively impact surrounding ecosystems. Cooling these computers also requires a massive intake of water, disproportionately draining resources from environments that are often already drought-ridden. Some AI programs, however, are also used to track carbon emissions, helping to triangulate the sources of pollutants and lower them, which may prove incredibly useful, especially with rising levels of pollution in our atmosphere.
When it comes to something like generative AI, a seemingly brand-new, powerful, and fast-developing technology, we as a society must ask ourselves: does the excitement of these shiny new programs outweigh the ramifications that come with them?