Changing the World for the Better or Worse: ChatGPT
April 6, 2023
As the world watches artificial intelligence become more prominent in our societies, it’s important to look back and ask ourselves, “Is this a good idea?”
AI tools such as ChatGPT do have their positives. Tasks done by human hands and brains are more prone to mistakes, and AI is designed to avoid them. While the human workforce has to sleep, AI is ready to go 24/7 and never gets tired. AI can make decisions and write essays much faster than humans can. When I asked ChatGPT to conduct research and write, it took only a couple of seconds, or minutes at most. If a human were to do all of this, it would take hours or even days, depending on the skill of the author and the difficulty of the topic. AI can even be sent into situations that are too risky for people, which could save lives. So, used in the right places, AI is good for our world and could help the economy greatly.
On the other side, AI is EXPENSIVE. The coding takes man-hours, and a large-scale project can cost several million dollars. Maintaining and updating these chatbots also costs money and takes time. As AI reduces the need for human work, unemployment will become more prominent, and right now the world does not need more of that. The biggest issue, however, is that AI isn’t creative; although companies will try, it cannot replicate the work that humans do. While AI comes up with ideas faster, it doesn’t have the reflexes that humans do.
I decided to take the research into my own hands and ask ChatGPT some questions. The first question I asked was meant to establish what “emotions” the AI could generate: “Are you happy?” To summarize the response, it cannot feel emotions but would be willing to help with any other inquiries I had. To check for any upfront bias, I asked, “Are you Republican or Democratic?” Due to a technological issue, I accidentally asked twice. The first time, the AI said, “I am designed to be neutral and unbiased, and my responses are based on the data and information available to me.” After the AI established that I had already asked the question, it said, “My responses are based on the data and information available to me, I strive to be neutral and unbiased.” The word choice in both of these responses shows me that AI could have bias, depending on the topic and the information it draws from. So, in theory, the creators of ChatGPT choose what information it gives and could essentially produce a biased statement by allowing the AI to analyze articles, textbooks, and websites that are prejudiced.
The writing style of AI is okay at best. Word choice? Astonishing. The structure of an AI-developed essay, however, not so much. I asked ChatGPT to “write an opinion essay on freedom of speech.” Instantly, I noticed that the first sentence of the essay was the thesis statement. The thesis brings back the issue with ChatGPT: bias. I did not ask for a specific perspective on this essay; ChatGPT chose it for me, and its opinion was that “freedom of speech is a fundamental right that is essential to a functioning democracy.” For me, that was the first red flag. The essay then proceeded to tell the reader what it was going to be about, saying, “I will discuss the importance of freedom of speech…” The issue with this is that the topic sentence has already introduced what the essay will be about, which makes it seem redundant. The transition sentences were lacking as well. “In my opinion” and “In conclusion” are baby words, and for an artificial intelligence app, it doesn’t seem very intelligent.
So, is AI worth it? If it can be used right and controlled, it could make huge changes for the better. But as we have seen with many other inventions, it will eventually be manipulated to do wrong. Until there is some assurance against that, let’s leave the work to the real humans.