Alex Jimenez

Don't Do This with ChatGPT!


Image by ThankYouFantasyPictures from Pixabay


Journalism has been heavily affected by the technology transformation happening all around us. The introduction of the internet, and later of smartphones, has significantly changed both the distribution and the actual practice of journalism. While there have been some positives, e.g., the ability to cover developing stories in real time through social media, the overall impact has been negative, mostly because of the damage to the industry’s financial model.

With the advent of large language models (LLMs), we are seeing technology impact journalism once again. There are reports of some publications, such as CNET, using LLMs to write articles that would otherwise be written by a human.


While this is a worrying trend, I’m not writing this to explore the future of journalism – I don’t feel qualified to do a good job there. My aim, instead, is to call out journalists themselves, and those who assign them work, for how they are covering LLMs.


Background

Starting in the fall of 2022, with the introduction and rapid rise in popularity of ChatGPT, I noticed an increase in wrong-headed takes on LLMs in and around the financial services and technology press.


First, let’s understand how ChatGPT works and what it’s built for. Stephen Wolfram, physicist, researcher, and author, explains it well: “[W]hat ChatGPT is always fundamentally trying to do is to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’ we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”


When I posed the question to ChatGPT itself, it described what it does thus: “I am a machine learning model that was trained on a large dataset of text. I use statistical patterns in that data to generate responses to the prompts I receive.” In other words, it is continually stringing words, sentences, and paragraphs together, using patterns learned from its large dataset to anticipate the next piece. It’s like letting autocorrect loose, only this time it’s much more elegant.
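To make that “autocorrect let loose” idea concrete, here is a toy sketch. This is not OpenAI’s code and is vastly simpler than a real LLM; the tiny corpus and every name in it are made up for illustration. It only shows the core task: predict a plausible next word from what has been seen before.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram "model" that picks the next word based on
# how often it followed the previous word in a tiny made-up corpus. Real LLMs
# use neural networks trained on billions of pages, but the core task is the
# same: produce a plausible continuation of the text so far.
corpus = (
    "the bank raised interest rates . "
    "the bank lowered fees . "
    "the bank raised fees ."
).split()

# Count which words tend to follow which word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(prompt_word, length=6):
    """Generate a 'reasonable continuation' by sampling frequent next words."""
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        candidates, counts = zip(*options.items())
        words.append(random.choices(candidates, weights=counts)[0])
    return " ".join(words)

print(continue_text("bank"))  # e.g. "bank raised fees . the bank lowered"
print(continue_text("bank"))  # possibly a different continuation this run
```

Nothing in that sketch “knows” anything about banks or fees; it only counts and samples. Scaled up enormously, that is the flavor of what an LLM is doing when it answers a prompt.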


If you ask ChatGPT to write something given a defined scenario, it will generate a piece that is usually grammatically correct and covers that scenario. If the response is long enough and you read it carefully, you will notice that something is missing. It often repeats itself, “makes up” ideas and sources, and is frequently incorrect. On the surface, however, it is a remarkable feat that gives the reader a sense that the “machine” understands the subject matter and that the responses are dead on. It is this feeling that has allowed many people, including those writing in the financial services and tech press, to get carried away.

What’s The Big Deal?

As noted, ChatGPT is trained on a large dataset. The dataset used by OpenAI, ChatGPT’s maker, is “called the Common Crawl, which is a publicly available corpus of web pages, (that) includes billions of web pages.” This dataset was also augmented with other datasets, such as the entirety of Wikipedia, news articles, and books. The more text used in training, the better the LLM can anticipate the next piece of text to “write.”


As people use ChatGPT, they are asking the model to respond to specific questions or prompts. If users don’t understand where the responses come from, the way that ChatGPT responds fools them into believing that the answers are vetted and produced by some sort of cognitive engine.


The danger comes when those questions concern highly specialized areas, where true understanding and cognition matter. For example, when a user prompts the model with symptoms and asks for a diagnosis, the model responds with the statistically likeliest text, drawn from whatever sources happen to be in its training data rather than specifically from vetted medical databases. While OpenAI does insert certain warnings in response to these types of prompts, users seem to either ignore them or bypass them by prompting the model differently. The old computer science adage “garbage in, garbage out” (GIGO) applies here.


Examples from Recent Articles

As I said before, I noticed articles based on a misunderstanding of how LLMs and ChatGPT work. The barrage of these articles led me to start a list of the ones that gave me pause or just plain annoyed me. Here are some examples:

  • From AMBCrypto: I asked ChatGPT about Shiba Inu’s price, it said that SHIB will rise by 50% within…. Here is an excellent example of a user circumventing the OpenAI warnings. When ChatGPT wouldn’t answer where the price of this token was going, the writer asked the same question again, making the model pretend it was writing a movie script. It is a clever “hack,” except the writer then goes on to use the response as ChatGPT’s prediction of where the coin is headed. Don’t base your investing on ChatGPT’s predictions!

  • From The Points Guy: I gave ChatGPT complete control of my city break. Here's why I wouldn't do it again. I’m glad that the writer learned a lesson. However, the premise is a ridiculous one. He asked ChatGPT to plan his next vacation from London given some specific parameters. ChatGPT suggested Barcelona, but the writer tried a second time and it suggested Lisbon. By the way, he was annoyed that the second try gave a different response. That is expected behavior: the model samples a fresh prediction each time it runs (see the short sketch after these examples). So, he visited Lisbon and followed ChatGPT’s suggestions. A vacation isn’t life or death, but turning your decisions over to a model that is just predicting the next set of text doesn’t make much sense. Don’t plan your vacation using LLMs, unless that is what they are built for!

  • From Insider: ChatGPT: LeBron James Is Not Among the Sports GOATs. I felt dumber after reading this article. The writer says: “While we specifically asked for the sports (greatest of all time) GOAT, the mention of Jordan without James puts a big dent in the basketball debate.” No, it doesn’t. By the way, Gretzky, Muhammad Ali, Jesse Owens, Garfield Sobers, Mia Hamm, Sonja Henie, Cristiano Ronaldo, and so many others are not on the list. Don’t settle dumb sports bets based on ChatGPT. It doesn’t know anything!

  • From GoBankingRates: How To Create a Budget Using ChatGPT: A Step-by-Step Guide. You would think that a website that is all about financial advice would get it. The chance that whatever budget ChatGPT spits out is comparable to what an accountant, financial advisor, or banker would give you is laughable. Given that basic budgeting advice can be found on most bank websites, you’re likely to get a basic response based on exactly that: a generic answer that doesn’t take a person’s very specific situation into account. Don’t ask for financial advice from a fancy parrot!

  • From Finbold: ChatGPT gives 10 reasons why you should buy Bitcoin. The crypto press is enamored with the idea of using ChatGPT for investment advice. Here is another instance. This article is a great example of the plain vanilla responses the model spits out. Once more, don’t take advice from ChatGPT on investing strategy!

  • From GoBankingRates: How To Save Money on Car Insurance Using ChatGPT. No, this was not written by the same person as the previous entry from GoBankingRates. In this article, we are asked to pay for ChatGPT Plus to get real-time data as part of asking for advice. Easier than all of that: go to a reputable auto insurance company’s website, read its evergreen article about saving money on car insurance, then visit a comparison website and pick the best option for you. I just saved you the $20 per month that OpenAI charges for ChatGPT Plus. Don’t use ChatGPT for things you can do on your own without the suspect advice!

  • From Make Use Of: How to Use ChatGPT to Improve Your Entire Lifestyle. Why bother using a text predictor on steroids for some simple task when you can turn your whole life over to it? The subheading of this article is “From dieting and fitness to productivity and fun, you can use ChatGPT to make positive changes throughout your life.” All the examples in this article ignore the fact that this isn’t a tool for any of the goals given. Further, it overpromises with subtitles like “Create Personalized Fitness Routines With ChatGPT.” How personalized can the suggestions be when the model is neither a fitness expert nor aware of your goals? Don’t trade expert advice for generic responses driven by text prediction!

  • From EurasiaReview: AI ChatBot Warns Of Nuclear Risks In A Militant Political Climate. Now that we have given up all our personal decision making to ChatGPT, why don’t we prompt it to scare us? This “interview” with ChatGPT is as irresponsible a use of this technology as it gets. Don’t get your geopolitical positions from your neighborhood LLM!

And finally, from NBC News: People are using ChatGPT like a personal trainer. This isn’t an example of the kind of article I’m listing BUT it describes the kind of wrong-headed use of the model that I’m ranting about.
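Back to the point in the Points Guy example about the second try producing a different city: here is a minimal sketch of temperature-based sampling, the usual way text generation from these models is randomized. The city names and scores below are invented for illustration, not taken from ChatGPT.

```python
import math
import random

# Invented scores for illustration; a real model produces scores (logits)
# over tens of thousands of possible tokens, not four city names.
logits = {"Barcelona": 2.1, "Lisbon": 1.9, "Prague": 1.2, "Porto": 0.8}

def suggest_city(temperature=0.8):
    """Turn scores into probabilities and sample one suggestion."""
    scaled = {city: math.exp(score / temperature) for city, score in logits.items()}
    total = sum(scaled.values())
    weights = [value / total for value in scaled.values()]
    return random.choices(list(scaled), weights=weights)[0]

# The "same prompt" run twice can easily return different cities: no new
# facts were consulted, the model simply sampled its prediction again.
print(suggest_city())
print(suggest_city())
```

The takeaway is that a different answer on the second try is not the model “changing its mind” after further research; it is the same statistical prediction, rolled again.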

Conclusion

As we look at the impact of AI, and LLMs in particular, we need to be ready to call out uses that are wrong-headed. ChatGPT is not an example of artificial general intelligence (AGI), as many people seem to assume. We are many years away from an AGI that we can rely on for some genuine understanding of the problems we pose to it. LLMs are extremely powerful, and we are just scratching the surface of their uses. All of us who have some part in shaping the general public’s understanding of these tools need to be careful about how we explain their capabilities and limitations. Journalists who cover technology, and who aim to translate the technical for the lay businessperson, have to do better than getting carried away in the manner I have described here.


A version of this article was published on LinkedIn here
