
Let’s Talk About AI


Artificial Intelligence, or AI, has been a major topic of discussion in recent years. With the release of programs with a wide range of surprising new capabilities, the conversation has shifted toward how much AI actually helps and whether its use is ethical.


This extended post is all about helping you better understand some of the issues at hand and make informed decisions for yourself.


We will discuss how the term “AI” is a little bit of an oversimplification of what these programs do. We will discuss whether using AI might sometimes amount to plagiarizing someone else’s work. We will talk about the wide variety of potential use cases for AI programs.


So let’s dive in!

What exactly is AI even?

In all the heated discussion around AI, it can be easy to lose sight of what these programs do or how they work. It’s understandable since these programs can feel so abstract compared to what we are used to! But it’s also a problem because the way these programs work can impact the ethics of their use.


One popular image of AI is essentially that of a human brain in computer form. Prior to the rise of AI programs, it might have been the image in most people’s heads when they thought about AI, thanks to sci-fi and popular media. The reality of AI today is a little more complicated than that. In a nutshell, today’s AI learns to perform specific tasks and refine its own processes in ways that in some contexts appear to mimic human intelligence.


To be clear, we do not have the knowledge or expertise to say whether a true AI, the way we imagine it in science fiction, is possible in the future. We are just drawing a distinction between the popular image of AI and AI programs as they currently work today. Let’s start by taking a close look at the concept of machine learning.

What is Machine Learning?

Machine Learning is a subset of AI development that allows programs to train themselves on datasets. The topic of machine learning itself is a huge one, and we will not be able to cover all the complexities here! 


The important thing to know for our discussion is that AI programs often use machine learning to continually refine themselves. Some might rely on a human telling them whether their output is right or wrong, and others might be able to test themselves. 


Each machine learning program works in its own way, but they all rely on a complex set of probabilities that get adjusted through the machine learning process. This constant adjustment is what makes AI programs appear so much more complex and sophisticated than other computer programs we might be familiar with.


If that all sounds very complicated, here’s a more simplified version: machine learning helps AI programs get better at the task they are assigned to do.
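If you are curious what that refinement can look like under the hood, here is a deliberately tiny, made-up sketch in Python. It is not how any real AI product is built, and every number in it is invented; it just shows the basic loop of guess, get feedback, adjust, and repeat.

```python
# A toy "model": it guesses an answer by multiplying the input by a weight,
# then nudges that weight whenever feedback says the guess was off.

def train(examples, weight=0.0, learning_rate=0.01, rounds=200):
    """examples is a list of (input, correct_answer) pairs."""
    for _ in range(rounds):
        for x, correct in examples:
            guess = weight * x                   # the program's current guess
            error = guess - correct              # feedback: how wrong was it?
            weight -= learning_rate * error * x  # small adjustment to do better next time
    return weight

# Pretend the task is learning that the answer is roughly 3 times the input.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]
print(train(data))  # after many small adjustments, the weight ends up close to 3.0
```

Real machine learning systems adjust millions or billions of these numbers at once, but the idea of improving through repeated feedback is the same.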

Common Non-Controversial Uses of Machine Learning

Machine Learning and AI are typically at their least controversial when they are assigned discrete, objective tasks that a computer can handle at scale more easily than a person can.


A classic example is image recognition. It is fairly straightforward to tell, objectively, whether a program has correctly identified an image. Likewise, some AI programs are made to refine images so they are sharper and higher quality.
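To make that concrete, here is a small, hypothetical illustration in Python (the file names and labels are invented). Grading an image-recognition program comes down to comparing its answers against labels we already know are correct.

```python
# What we know is true vs. what the program guessed (all values made up).
correct_labels  = {"photo1.jpg": "cat", "photo2.jpg": "dog", "photo3.jpg": "cat"}
program_guesses = {"photo1.jpg": "cat", "photo2.jpg": "cat", "photo3.jpg": "cat"}

# Count how many guesses match the known correct answer.
matches = sum(
    1 for image, label in correct_labels.items()
    if program_guesses.get(image) == label
)
accuracy = matches / len(correct_labels)
print(f"The program identified {accuracy:.0%} of the images correctly.")  # 67%
```

There is no judgment call involved: each guess is simply right or wrong.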


In these use cases, the discussion can focus on whether the program is doing its job effectively and correctly, rather than on whether it is right to use it in the first place. Currently, much of the controversy around AI programs centers on Large Language Models. Let’s take a closer look.

What are Large Language Models?

AI programs can be created to perform a variety of tasks, but the ones that most commonly come up in popular discourse are Large Language Models (LLMs).


Large Language Models are associated with some of the most popular AI programs, such as ChatGPT and Gemini. The way people talk about LLMs can often be a product of how they feel about LLMs in general, so there is a wide variety of perspectives on them. Here you can find a nuts-and-bolts explanation of the fundamentals, while here you can find a version that offers much of the same explanation, along with extensive writing on the possibilities associated with LLMs.


One helpful way to think about LLMs, as noted in the first linked post, is “mathematics running at a ridiculous scale.” It’s hard for us as people to think of language as something that can fit into a series of predictable patterns because language itself is so complicated. But it is possible for a powerful enough computer to use statistical probability to interpret and respond to text. To the casual observer, the imitation may be powerful enough to feel like genuine artificial intelligence. But the program in question is not so much choosing what it wants to say as pulling what it has statistically determined is the “correct” response.
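As a wildly simplified illustration of that idea, here is a hand-written sketch in Python. Real LLMs learn billions of numbers from their training data rather than using a tiny lookup table, and the phrases and probabilities below are invented.

```python
# Made-up probabilities for which word tends to follow each phrase.
next_word_odds = {
    "peanut butter and": {"jelly": 0.90, "crackers": 0.08, "pickles": 0.02},
    "once upon a":       {"time": 0.97, "mattress": 0.02, "hill": 0.01},
}

def continue_text(phrase):
    options = next_word_odds.get(phrase, {})
    if not options:
        return "(no guess)"
    # The program is not deciding what it "wants" to say; it simply returns
    # whichever option it has rated as most probable.
    return max(options, key=options.get)

print(continue_text("peanut butter and"))  # jelly
print(continue_text("once upon a"))        # time
```

Scale that lookup up by many orders of magnitude, with the probabilities learned rather than hand-written, and you have the core of what an LLM does when it responds to you.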


This approach works in lots of situations! But language is also a reflection of our individuality and personal expression. LLMs are not a replacement for our own thinking and expression, because our own thinking and expression are not just a product of probability.


That is not to say LLMs can’t have their uses! Let’s talk a little more about why people might find them helpful, and why their use is sometimes controversial.

What is the controversy around LLMs?

Much like the programs themselves, the controversy around LLMs is complicated. There is a discussion to be had about when it is appropriate to use those programs and when it is plagiarism. There is a discussion to be had about whether LLMs actually help with the creative process or not. 


There is also discussion as to whether all of these programs sourced their data ethically. There is further discussion still about whether these programs are worth the environmental impact and the ripple effect of such enormous financial and resource investment into AI development. 


These are all important discussions that deserve their own space, but let’s start with the most basic one.

Are LLMs inherently bad?

The short answer is no! The slightly longer answer is that there are ways to create and use LLMs ethically, and you are unlikely to find someone who would argue that LLMs should never be used by anyone under any circumstances. When there is controversy around an LLM, it is usually about how it works or how it is being used. So let’s take a look at some of those controversial use cases.

LLMs and Plagiarism

Many people were first introduced to LLMs through ChatGPT, and ChatGPT came with the ability to output large quantities of text with a simple prompt. ChatGPT could write a whole essay in seconds when it might take you hours or days! 


Naturally, many people tried to pass off ChatGPT’s work as their own. But ChatGPT is not a replacement for your own voice and research! Not only has ChatGPT been documented to make factual errors and fabricate citations, but it also writes in its own distinct style that is distinguishable from yours. Passing off someone else’s work as your own is harmful, not just to other people but to yourself.

LLMs and the Creative Process

LLMs offer a shortcut in the creative process that can save a considerable amount of time. It can be tempting to focus on the sheer volume of output we could produce by relying on AI. But it is also worth thinking about what relying on AI might cost you.


The creative process can be long, boring, and frustrating at times. But it is also how we develop our own unique voice and perspective. It is what sets us apart as individuals whose work has a value all its own. When we let LLMs take over our creative process, we give up the chance to sharpen the skills that help us better express ourselves to the world.


On top of all this, output generated by LLMs and other AI programs is generally not copyrightable.


That is not to say LLMs can’t offer a helpful prompt or starting point. But it is also worth considering the long-term value of putting more time and effort into work that is uniquely your own. 


Of course, there is more to the question of LLMs than the ethics of how we use them as individuals. There are also ethical discussions to be had about how LLMs are created. Let’s take a closer look.

LLMs Using the Work of Artists Without Permission

We talked earlier about how machine learning and, consequently, LLMs rely on large datasets to create convincing output. But where exactly do these datasets come from? Sometimes they can be purchased. Some organizations try to create datasets for LLMs to train on, though there are questions as to their effectiveness. 


As the programs get more and more sophisticated, the need for data gets larger and larger. Increasingly, this has led to cases where LLMs or image generators have used the works of artists without their permission.


The law on this issue is still unsettled. But it is worth considering, ethically, whether it is OK to use programs that draw on the work of artists who have made it clear they did not want their work included in the first place. Is it possible to find LLMs that ethically source their datasets? These are questions worth considering if you find yourself wanting to try those tools.


Images are the most famous examples, but AI text summaries raise a similar concern: they draw attention to the AI’s output instead of the work of the original writer.

The Impact of AI as an Industry

AI has experienced a major surge in investment and activity in the last few years. AI doesn’t always feel like an industry the way a factory or a refinery might. But the computing power required to run some of these programs is enormous! 


The growth of AI has an environmental impact, like the growth of any other industry. An industry should not be dismissed merely for having an environmental impact, but it should be able to account for the impact it has and ensure it is conducting its business practices safely.


AI also comes with questions of labor ethics. We talked earlier about how human verification is sometimes part of the machine learning process, and that is no different with LLMs. LLMs often rely on human labor both to train the program and, in some cases, to step in when the program cannot cope. Such labor is not highly visible, since companies want their products to appear as automated as possible. We want to be clear that human labor is a normal part of the machine learning process. But it is also worth considering a company’s hiring practices and how it treats its workers in the process of building its models.


Finally, the glut of investment in AI has led to skyrocketing prices of common computer components, making a huge range of electronics far more expensive for average consumers.


Even once we have settled the ethical question of whether it is OK to use LLMs in certain circumstances, there is the greater question of how much we want to invest in these tools, given the costs and ripple effects. 


We as individuals may not have a huge impact on how these decisions get made, but understanding the issues can inform the choices we make and the products we choose to use.

How Can I Decide?

Like many people, you may see the potential value of some forms of AI but also be concerned about using such programs ethically. Check out our decision tree below. It is a little different from other decision trees in that it does not tell you exactly what to do. Rather, the goal is to help you recognize when a given AI product might involve one of the controversies we discussed above. It is up to you to learn about the specific program you are considering!

