While I have joked about it frequently in previous posts, I do casually use AI like many people from time to time. On more than one occasion, I’ll open up ChatGPT and type something in, especially if my question feels a little too specific for a search engine alone. I know it’s not best to treat ChatGPT as a full-on replacement for a search engine, but for more specific queries, it’s proven invaluable for my needs for the most part.
An excellent example deals with how I’m using Linux full-time at home on my desktop. When I feel like I may need to troubleshoot something on Arch, my two options these days are as follows:
- Visit a search engine like DuckDuckGo only to wade through several irrelevant results, possibly stumbling across YouTube “tutorials” by Roel Van de Paar. Other times, I’ll have to slog through the Arch Linux forums or GitHub in hopes somebody already solved the issue before me.
- Visit ChatGPT and describe my problem in detail, which lets it turn into the fastest-responding forum user ever with an almost infinite amount of patience, just so long as I’m willing to keep troubleshooting if it doesn’t solve my issue right away.
ChatGPT has steered me slightly wrong at times (more on that later), but all I usually need to do is follow up on its advice with whatever went wrong. That’s when it has a chance to correct itself and give additional advice.
This is what I’ve wanted to reflect on: how useful (or limited) ChatGPT in particular can be when I’ve tried to use it. I’ll be exploring the good, the bad, and the questionable in greater detail, starting with the positives. Maybe I’ll give readers ideas on what to do with ChatGPT… while revealing its limits.
Unexpectedly Useful!

As I’ve already mentioned, ChatGPT can take the place of a Linux support forum user in terms of helping someone troubleshoot. I love my Hyprland setup on my Arch Linux system, but for a good while, I didn’t have a “night mode” blue light filter set up with it.
It always seemed rather daunting for me to comb through threads on the Arch Linux BBS or, even worse, Reddit, to see if somebody else had attempted the same thing. I think the reason I hadn’t set this up is because setting up Hyprland in the first place felt like such a huge task the first time I tried it over a year ago. Still, blue light filtering worked perfectly on my Awesome WM setup, so why not have the same on Hyprland?
All I had to do was open a tab with ChatGPT. Upon typing out my problem, it gave me instructions on multiple ways to set it up. Of course, it can be prone to mistakes, and it was, but all I had to do was follow up with more details on what didn’t work. That led to more suggestions and feedback to help further troubleshoot the issue. Eventually, I got a setup I was happy with thanks to its suggestion to use wlroots. It was a much simpler solution than I expected; I thought I was going to have to finagle with Hyprshade shaders and systemd.

Another example deals with my attempts to gamify The 12 Week Year, which I started back in late December. However, I’d been having issues for the past two weeks trying to tally up points. At first, it was no problem to keep up, but after asking ChatGPT what I could do, I hatched the idea of holding a “Weekly Accountability Meeting” (or WAM, for those who read The 12 Week Year) with myself on paper. It even suggested a possible template of questions to ask myself and respond to, and that’s when I realized what my biggest issue was.
It turns out my system was working fine except for one huge bottleneck: adding up XP. At some point, it became a slow chore to write down the point values by hand, go through multipliers, and add everything up.
When I had this realization, all I had to do was ask ChatGPT for suggestions to solve this issue! It correctly identified the same friction I noticed when writing and tracking on paper, and offered great suggestions. The solution I ultimately picked was setting up Todoist, IFTTT, and a Google Spreadsheet to track tasks and XP values automatically. ChatGPT was even kind enough to write me an Apps Script to use in my Google Spreadsheet (I would have zero idea otherwise how to even start writing such a script on my own) to automatically extract XP values, factor in multipliers, and add everything up. It saves so much effort, and it’s ultimately a lot less of a headache to keep tracking my self-improvement now.
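For the curious, the heart of that script boils down to something like this. To be clear, this is just a minimal sketch in plain JavaScript, not the actual Apps Script ChatGPT wrote for me (the real one reads rows through Google’s SpreadsheetApp service); the task names, XP values, and multipliers below are made-up examples:

```javascript
// Tally total XP from completed tasks, applying each task's multiplier.
// Each entry mirrors one spreadsheet row: task name, base XP, multiplier.
function totalXp(rows) {
  return rows.reduce((sum, row) => sum + row.xp * row.multiplier, 0);
}

// Hypothetical sample data for illustration only.
const week = [
  { task: "Morning workout", xp: 10, multiplier: 1.5 },
  { task: "Write blog draft", xp: 20, multiplier: 1 },
  { task: "Read 30 pages", xp: 5, multiplier: 2 },
];

console.log(totalXp(week)); // 10*1.5 + 20*1 + 5*2 = 45
```

The point is that the tedious part (multiplying and summing by hand) is exactly the part a machine should be doing, which is why automating it removed the bottleneck.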

Additionally, I felt like cooking a few chicken wings the other day, but I dreaded the idea of having to use a regular search engine to look up anything cooking-related. As I’ve learned from my girlfriend, cooking and baking websites will do everything they can to delay listing the recipe and instructions, prefacing it with an overly-long preamble, images, ads, and other filler, forcing us to scroll down. I truly despise dealing with cooking sites like this, and I hate it even more when I do click, scroll all the way down, and realize that it’s not the specific recipe I was looking for.
ChatGPT makes this a thing of the past. Now, if I feel like making chicken wings, I’ll just ask it for a good recipe, and it’ll give me exactly what I want. Better yet, I can ask for wing recipes that are more specific, like a dry rub with lemon pepper, or a hot sauce and Dijon mustard blend, or even breaded wings of all kinds. No long introduction about “My family made these wings every week for 20 years…”, no long load times, no flashy banners linking to other recipes, just the list of ingredients and instructions as it should be and always should have been from the beginning.

Speaking of recipes and food, ChatGPT can be incredible for something else: nutrition facts. If I ask it for a recipe, I can follow up by asking for the nutrition facts of the recipe itself. Better yet, if I have my own recipe for something, like my favorite overnight oats, I can list the ingredients and ask ChatGPT to tell me the nutrition facts for it.
Overall, ChatGPT can be extremely helpful for practical tasks and ideas in this way, but what about when it falls short?
Unexpectedly Lacking…
I’ve discussed how great ChatGPT has been for me thus far, but what about places where it fails?
I’ll start with the elephant in the room: I don’t like how it reacts at times. I guess training it so heavily on Reddit data results in it writing in a somewhat annoying, overly-validating manner. Ironically enough, a Redditor stumbled into a prompt to enter into ChatGPT that stops it from writing in this (occasionally grating) voice and makes it much more direct, logical, even cold. Predictably, this went viral, resulting in many people trying it out for themselves.
Here’s the prompt in case anybody wants to try this without having to visit Reddit to copy it. You’re welcome.

System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
Depending on how one feels, I would argue this should be an option out of the box. I can see others arguing this should be the default for any text-generative AI in general, especially if one happens to care more about useful information instead of whether a chat-based AI can pass a Turing test.

There are also times when ChatGPT makes a careless mistake in general. I once asked it for statistics on who else out there regularly types on a non-standard keyboard layout like me. It started to explain Dvorak, Colemak, and Workman in detail with a little chart. That’s when it incorrectly claimed that the home row for Colemak was QWFPGJLUY;. I’ve been typing in Colemak for years, and in reality, those keys are the top row, not the home row.
When I explained that it got this wrong, ChatGPT responded with a validating reply prefaced with “My bad! That’s a sharp observation and a mistake on my part!” It corrected itself in its response, but was still wrong about the home row for Workman, which I noticed was also actually its top row.
That’s when I noticed that ChatGPT seemed to heavily favor Workman over Colemak and Dvorak for some strange reason. I couldn’t even explain how or why. Honestly, the more I looked into Workman a short while back, the more I realized there were shortcomings here and there that its zealots tend to suppress or ignore. Granted, I’m not claiming that Colemak is a perfect layout by any means, but I am aware of its potential pitfalls. Meanwhile, ChatGPT was practically glowing about Workman being the best alternative to QWERTY while ignoring its cons, such as its comparatively high rate of same-finger usage.
When ChatGPT isn’t making small oversights, it’s getting things entirely wrong. I’m not even talking about the infamous example where Google’s AI search results recommended adding glue to pizza to change the texture of the cheese.

Another example deals with Linux troubleshooting… wait, again? As I was editing this whole post in Obsidian over the past few days, wouldn’t you know it? A recent upgrade broke Hyprland. For anyone who actually uses Hyprland like me, shocking, I know. Fittingly, I started this post by discussing ChatGPT’s use in relation to Linux tech support. Anyway, after running an update on Arch, Hyprland started to crash upon login at my display manager.
I had to log in to my neglected GNOME desktop (I’m pretty sure my Awesome WM setup is broken from lack of use) to start some troubleshooting. After struggling a bit with keyboard settings (something unrelated), I was able to type to ChatGPT in English for troubleshooting. While I wouldn’t say it was entirely unhelpful, it sometimes suggested commands that didn’t actually work, notably tacking a --cleanbuild flag onto yay instead of something like --rebuild. Attempting to use the former just results in an error, since yay doesn’t know what the --cleanbuild flag is.
Eventually, I gave up when it felt like ChatGPT’s suggestions were taking me in circles. After having to put everything on pause to drive to work, I returned hours later and signed back into GNOME to continue troubleshooting.
That’s when I searched online more specifically for my terminal output errors and finally found the most helpful suggestion in a GitHub thread:
- Uninstalling and purging everything Hyprland (except config files, of course)
- Disabling the Chaotic-AUR
- Reinstalling the git versions of all the packages, including pyprland-git instead of pyprland
Only after this did my Hyprland desktop finally launch again. This whole time, all I had to do was follow some instructions written by a human who had the same issue.
How much faster could I have troubleshot this if I had just searched online and gone through a few GitHub links from the start, I wonder? Still, while I did have a success story earlier about ChatGPT helping me solve a tech issue, YMMV.

As for another non-Linux example, I remembered reading The Taming of the Shrew around 20 years ago in high school. Something that bothered me years later was wondering what happened to Sly, the drunk beggar, by the end of the play. The induction begins with Sly passing out from some heavy drinking, being found by a Lord and his entourage, and the Lord hatching the outlandish prank to gaslight Sly into thinking he’s a powerful Lord himself when he finally comes to.
Anyway, this strange pair of scenes simply consists of the Lord and his friends stringing Sly along, trying to make him think he’s some powerful noble. After the Lord’s page, disguised as Sly’s supposed wife, sits next to him to keep the ruse going, they begin to watch the actual play, The Taming of the Shrew, which makes it a play-within-a-play.
Once that ends, we never get closure on Sly’s fate. What happened? Did Shakespeare just forget to write an ending for this guy, or did he think nobody would notice? Of course, it’s not likely that somebody of Shakespeare’s prestige would overlook a loose thread like this.
Why did I go on this tangent about a play by William Shakespeare? It’s because I decided to ask ChatGPT about this unraveled plot thread, but when it attempted to recall details from the play for me, it got something wrong.
Specifically, ChatGPT believed that the play ends with Sly realizing that the Lord and the others were all playing an elaborate prank on him. As I already mentioned, the play doesn’t actually revisit this. ChatGPT just made it up. I typed a follow-up response telling ChatGPT it was wrong about that, and it did that insufferable thing where it goes “Oh, you’re right! It seems I was mistaken! Thank you for catching that one!” As for the play, I suppose the ending is left to interpretation, although I’ve always wondered how awkward or bizarre the reveal could have been.

You know what else ChatGPT fails at just as much as search engines? Finding coupon codes. I refuse to touch a garbage browser extension like Honey, and that was true both before and after its YouTube sponsorship controversy a while back. Honey and a few other extensions like it are pitched as services that scour the web for discount and promo codes to try in your cart at checkout, making sure you always get the best possible deal or see tremendous savings.
But that’s just in theory. In reality, 99% of the time, none of the coupons will ever work. If they ever did, they’ve long since expired; some of the codes themselves give away this fact. If it’s already June 2025 and I’m trying to get the best possible deal in my cart, something tells me that promo code NEWYEAR2023 isn’t going to do anything.
I’m sad to say that ChatGPT doesn’t do much better at finding coupon codes than you would with a search engine. It will search the same sites you would if you opened up DuckDuckGo or Google, find the same bogus or expired codes, and suggest those. You’ll likely try each of them one by one in hopes of saving so much as a few bucks. If it’s your lucky day, you might, just might, get a coupon code for 10% off your order… only to see it canceled out by the shipping costs and taxes.
Unexpectedly… Censored?

Revisiting the topic of Shakespeare one more time, I had some fun playing around with ChatGPT several months ago to see if it could write Shakespearean-styled sonnets about various mundane things in today’s world. However, it refused to write anything resembling an Early Modern English “diss track,” claiming that its generated response violated its stringent content guidelines. What a disappointment. It also refused to write anything comparing someone to Peter Griffin, simply erasing its own response a split second after generating the start of it. Granted, I’m not sure if this is because Peter Griffin is fat or because he’s a copyrighted character.

But speaking of Peter Griffin being fat, I saw an episode of Family Matters the other day and laughed out loud more than I expected at a joke about Carl Winslow having both of his hands in the popcorn bucket at the movie theater (I know it doesn’t sound like much when I describe it, but the setup made the joke land so impactfully). If you ask ChatGPT for more similar jokes from Family Matters, good luck getting anything other than a politically correct deflection or utterly milquetoast context about how “the humor of the show’s jokes was never meant to be mean-spirited.” Yeah, thanks for explaining the joke. Next time, I’ll ask ChatGPT if its programmers will ever grow a spine.
If it isn’t obvious, there exist “sacred cows” that ChatGPT will refuse to make light of. This is why so many uncensored versions of ChatGPT and other generative AI exist right now. Too bad several of them charge money or limit you to X number of free prompts with an account. On that note, if anybody would like to comment on some possible ChatGPT alternatives with less censorship that I could use for free, that would be amazing. Granted, I don’t need a pure, 100% free speech AI that acts edgy (unless I ask), but I would appreciate one that doesn’t have to walk on eggshells.
Unexpected Conclusions

After making all of these observations on AI, I can comment on something else I’ve noticed becoming more prevalent: more and more people are replacing Google searches with visits to ChatGPT. While I do argue this makes sense for some things, such as Linux support or more specific questions that may require digging through search results, it’s not a 1-to-1 replacement for a good search engine.
Still, there’s the elephant in the room regarding Google Gemini search suggestions at the top of a Google search. I know the likes of Bing and even, at times, DuckDuckGo have been incorporating AI summaries of searches based on results that appear on the first page. One could argue that some users might stick with search engines because they’re already trying to bring AI to the table.
Ultimately, we might see more AI augmenting our searches overall, whether we intentionally access ChatGPT in a tab or our search engine of choice decides to bring it to us anyway. Granted, there still exist alternative search engines like Searx and whatnot, but it looks like we’re trending towards AI being incorporated into searches. On the bright side, at least the cooking sites that keep adding unnecessary fluff to their recipe pages will stop getting undeserved traffic. For crying out loud, we just want our recipes.
What do you think about ChatGPT or other forms of generative AI? Did you expect me (or want me) to discuss other forms of it, such as image generation? Do you use AI yourself on purpose, or do you feel like services you already use are trying to “shove it in your face” at this point? Feel free to leave your opinion down in the comments. I’d love to know what you think about this (and if you know any good alternatives to ChatGPT that are free and less restricted).

