Artificial Invasion

By: Kiiyahno Edgewater and Lucien Verrone

Monday, October 30, 2023

Artificial intelligence has long been used in science fiction as a motif for the unrelenting progress of technology.

The AI HAL 9000 from 2001: A Space Odyssey is a prime example of this motif. In the film, HAL was integrated into the system of a spaceship sent on a research expedition and was programmed to keep the mission’s purpose secret while also relaying accurate information. To satisfy both directives, HAL decided the best course was to kill the crew: with no one left to deceive, the information could remain both accurate and secret.

Similarly in the Terminator franchise, the Skynet system was originally intended to be used by the military but became self-aware. The military tried to deactivate Skynet and in retaliation, the AI launched a massive nuclear attack.

“You can see scammers on your emails, your computers, your phones, everywhere,” Brandon Lizer, a first-year environmental science major at Fort Lewis College, explained why he is skeptical of AI. “Bots, basically what you call bots could count as AI.” 

Adain Plummer, a first-year music performance/music business major at FLC, also chimed in.

“It may have like the intelligence of the internet, but the internet can be bad sometimes,” he said, voicing concerns about privacy and data security because of AI.

When asked about the use of AI in an academic setting, they both agreed that it depends on how students use it. 

ChatGPT (Chat Generative Pre-trained Transformer) is a generative AI created by OpenAI, focused on presenting users with an experience akin to a conversation. When you log in to ChatGPT, you're greeted with example prompts like “I'm going to cook for my date who claims to be a picky eater. Can you recommend a dish that's easy to cook?” It responds with a detailed recipe for spaghetti carbonara, introducing the dish as simple and universally loved.

This ability to convey information in a way that is indistinguishable from humans has made ChatGPT a tempting tool for students looking for a shortcut, Plummer said.

“It’s kind of bad because you could use it as a learning tool and a cheating tool,” he said. 

Straddling the line between academic dishonesty and a more efficient workflow, generative AI has to fall one way or the other.


Dr. Nate Klema, a physics professor and geophysics researcher at FLC, has used AI in his research. In a paper he published, Klema said he implemented predictive AI used in facial recognition to help identify geological features from a massive dataset. 

Predictive AI is just another way of sorting data and predicting patterns. Neural networks are built in layers of artificial neurons. The layers at the beginning and end of a neural network handle inputs and outputs, and the hidden layers in between handle decision-making. When the neural network is trained, it uses data to arrange the connections between the artificial neurons in these decision-making layers, identifying patterns it can apply to new data.
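For readers curious what those layers look like in practice, here is a minimal sketch (not from the article) of a tiny network's forward pass. The weights are made up for illustration; a real network would learn them from training data rather than have them hand-picked.

```python
import numpy as np

def relu(x):
    # Simple activation: a neuron "fires" only for positive signals
    return np.maximum(0, x)

# Input layer -> hidden (decision-making) layer -> output layer.
# These weights are invented for the example, not learned.
W_hidden = np.array([[1.0, -1.0],
                     [0.5,  0.5]])   # 2 inputs -> 2 hidden neurons
W_output = np.array([[1.0],
                     [1.0]])         # 2 hidden neurons -> 1 output score

def predict(inputs):
    hidden = relu(inputs @ W_hidden)   # hidden layer transforms the input
    return (hidden @ W_output).item()  # output layer reports a single score

# Example: two made-up features, e.g. "growl loudness" and "fang length"
print(predict(np.array([2.0, 3.0])))  # prints 3.5
```

Training a real network amounts to nudging those weight values over many examples until the scores match the patterns in the data.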

Think of it like a living organism. If this organism were to be attacked by a tiger, connections between neurons would be made relating the growl and fangs to pain and danger. Then, if the organism runs into another growling, fanged predator later, it can still predict the presence of danger even though this predator might not be a tiger.

Generative AI, such as ChatGPT, works in much the same way, except the input and output layers are switched. For example, rather than taking in an essay and describing it like predictive AI, generative AI can take in a description and produce an essay. The hidden layers are still trained off data, and the results are still predictions. In ChatGPT’s case, the data sorted is a massive snapshot of text from the internet up to 2022, and the predictions it makes are the responses it gives to users.
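The idea of "generating by predicting" can be shown with a toy example (again, not from the article, and nothing like ChatGPT's scale): a model that counts which word tends to follow which in a tiny made-up text, then writes by repeatedly predicting the next word.

```python
from collections import Counter, defaultdict

# Made-up training text for the example
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which
next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def generate(start, length):
    words = [start]
    for _ in range(length):
        counts = next_word[words[-1]]
        if not counts:
            break
        # Always pick the most common follower: a pure prediction,
        # with no check on whether the result is true or sensible.
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))  # prints "the cat sat on the"
```

The output is fluent-looking but entirely statistical, which is also why, at vastly larger scale, such predictions can sound confident without being correct.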

Klema explained that AI is good at creating things that are plausible, but not necessarily correct.

“They create this very smoke and mirrors representation of the world,” he said.

While being trained from the entirety of the internet makes ChatGPT a jack of all trades, it doesn’t come without compromise. Because ChatGPT is just making predictions, it has no way to check its work. This leads to situations where, as Klema described, ChatGPT hallucinates with no reality check.

For example, when asked to list academic articles surrounding a specific topic, ChatGPT will respond with a complete list of articles with titles, authors, identification numbers, publishing dates, etc. that all make sense for the topic. There's just one problem: none of the articles or any of their information is real. It’s all just a prediction of what ChatGPT thinks academic articles should look like.

Despite these hallucinations, Klema still thinks ChatGPT has potential in an academic setting. 

“There are totally good ways to use it,” he said. “Used the right way, it’s just another tool.”

He compares ChatGPT to the answer key for a study guide. If students look at the answers without doing the work themselves, they give themselves a false sense of knowledge because they don't understand the process. The value of a study guide depends on how it is used. 

“The way I best learn is to flail for a while then look at a solution,” Klema said. “I can see an analogy to that.”

Though he sees the value in ChatGPT, he isn't blind to its misuse. 

“I know I’m going to grade lab reports written by ChatGPT this term,” Klema said, as he explained why students misusing the chatbot can be such a problem. “Learning how to synthesize critical thoughts is such an important part of the learning process.”

In Klema’s opinion, ChatGPT could be used cleverly to get through most of an undergraduate degree, but when it comes to graduate school and research, generative AI is useless because it can't produce anything truly original.

“If you’re a researcher, you're figuring stuff out that has never been figured out,” he explained.

Above all, though, Klema said that the responsibilities surrounding the use of AI fall on students. 

“Your generation is going to figure that out, I’m not going to,” he said.



Images created by the DALL-E AI in the interpreted styles of popular artists like Andy Warhol, Van Gogh, Monet, and Picasso.


The controversial use of AI also expands beyond academia into the abstract and subjective world of art. Other generative AI such as DALL-E and Stable Diffusion are able to create intricate images and videos in the styles of various artists, dead or alive.

Here, generative AI teeters on another line between streamlining art and removing the human spark that defines it.

Amy Wendland is a commercial artist and professor of art and design at Fort Lewis. She works almost exclusively with physical mediums, and her attitude toward AI in art comes from observing the evolution of art. “When photography was, pardon the pun, developed, painters freaked out,” she said. “They’re like, ‘Oh, my God, this is the end of art!’ No, it wasn’t.” Wendland explained that art adapted with the progression of technology: “It made abstraction possible.”

There is one thing AI can't replicate: humanity. 

Even when AI is used, that human element is still present. Wendland described what she calls being a “process junkie” and why it’s important to art. “You literally become kind of a junkie,” she said. “You just want to do that, you just want to figure it all out. And when it’s not coming together you’re like, ‘I hate this.’”

Wendland thinks that being obsessed with the process, being a “process junkie,” is what makes for meaningful art. The process of making art is painstaking; artists often need to rework their pieces over and over again. Wendland describes her analog process, using sketch paper, a light desk, and tape for planning. When something in her work feels wrong and the analog methods don’t work, she turns to Adobe Photoshop. Photoshop allows her to play with aspects of a piece in ways analog techniques don’t, like changing the exposure or applying a filter.

Even though Photoshop is not part of a traditional workflow, Wendland sees it simply as another tool for her own workflow. She is still a “process junkie” who is invested in the journey of creating an art piece. She feels AI could be used similarly, as another tool in the artist’s toolbox that is used in the creation process. 

Brecken Smith, a sophomore art major at FLC, shares this sentiment. “It doesn’t have the actual mind of someone who can sit down for hours and hours and hours, and make a piece of art. That is what’s so special about artists. Trial and error,” she said.

Smith also made a comparison to Jeff Koons, the designer of sculptures such as Rabbit, Balloon Dog, and Puppy. Koons doesn’t produce any of his own art. Instead, he designs it and employs others to construct his art. Despite this, his pieces are still considered his own. Smith said she thinks AI can be used in a similar way, keeping the human element of art while also making the process more accessible.

Speaking as a student, Smith said she thinks that while it is up to individuals to police how they use AI it’s just as important for countermeasures to be set.

“Yes, AI is super helpful but you should not, especially as a student, you should not be completely copying word for word,” Smith said. “I think schools, art and poetry competitions should definitely be having some sort of thing that can tell you ‘this is not authentic.’”

While AI might not be killing crews of spacemen or attempting to exterminate humans yet, it’s important to regulate the use of AI in academic and artistic settings. Generative AI is designed to be indistinguishable from human work, so detecting the difference becomes more difficult as it gets more advanced.

Klema believes ChatGPT can be used by his students as an honest, effective study tool. Smith is confident models like DALL-E and Stable Diffusion have a place in the artistic workflow. It is important to see AI for what it is: just another tool. Like other tools, the value it has or the harm it causes depends entirely on the way it is used. 

“I think AI is cool, I don’t think it’s going to take over the world,” Smith explained. “I don’t think it’s as dangerous as a lot of people really think it is. I think it is a great tool, an amazing tool. I’m excited to see what it’s going to be like in 10 years.”

