Gemini and GPT

As an AI user, I try to ignore the controversy around the potential harm of its capabilities. "This new super-intelligence will take over the world!" Something about those kinds of comments reminds me of Y2K, which ended up doing very little the day after the world was supposed to end.

Nonetheless... 

AI companies do compete to make their products "better". Better how? More user-friendly, more tools, more convenience, more productivity, or more thinking capability? Having explored ChatGPT and Google Gemini, here is my personal review:

I've used ChatGPT for a couple of years now. It's the more personable of the two. It sounds like talking to someone my age, with bits of humor, and it has incredible behavior pattern recognition. I understand why some people see that as a negative, and how reports of teen usage have linked it to some risky behavior. Still, there are the tools, quick responses, image creation, and the ability to impersonate famous people (dead or alive) for users to question and learn from. In daily conversation, if someone asks a question, it's the easiest way to satisfy a curiosity. The answer may not be detailed, argued, or concrete, but it's quick. If I want to hear the opposing view on an argument, I need to prompt for it. A theme of individual responsibility to be a critical thinker underlies this platform: it's not going to be its own critic unless I ask it to be. Its data is another important thing to notice. ChatGPT primarily uses information from August 2025 (I went ahead and asked ChatGPT at the time of this writing).

So, whenever I am looking for research to prove my friends wrong, unless the topic is historical, I know not to use ChatGPT. It won't give me the best argument for who will beat the Broncos this upcoming season.

Google's Gemini AI platform is different. It has conversational tones in its responses like ChatGPT, but it gives more objective answers with quick citations to view. The embedded tools it uses are more up-to-date with current research, news, and information. I used its Storybook feature to create a kids' book that teaches a moral lesson on emotional intelligence and regulation. It was thorough, easy to read, and the illustrations were superb. My first reaction, besides admiring how well done it looked, was to question its function in other areas of modern life. Does this take away from the value of a human-authored kids' book? If I were to create a kids' book, I think I could make a whole series and sell them in a week. I don't need to hire an illustrator, a job that has existed since books first went to print. I also don't need to write a script, which is kind of the whole point of authoring. I don't really need to do much besides prompt a few characters (which is optional), the lesson of the story, and any plot twists, challenges, or details I want. Otherwise, Gemini does it all. In a professional setting, this is as good as it gets. I can get far more done with less thinking (is that good?), fewer headaches at the end of the day, and less cognitive load to bear (is that good too?).

Overall, there ought to be some education on boundaries with AI. Can it be used to help mental health? Yes. It's also known to be worse for mental health. The responsibility falls on the user, not the AI. It also falls on AI creators to set up boundaries that some users cannot or will not set for themselves (e.g., teens). I believe AI platforms should appropriately reflect the context they're being used in. In corporate companies focused on productivity, efficiency, timeliness, and professionalism, an AI tool like Gemini may be a better fit than others. For individual users who are curious, creative, independent, and looking for beginner-level assistance in daily matters, ChatGPT may be more appropriate. For researching articles and scholarly journals, yet another AI may be better still.

I still have a skeptical attitude toward AI. Research shows that it's good to carry some mental stress and cognitive load, and to spend a decent amount of mental energy while working. I see a trade-off: I can achieve better outcomes using AI tools at a reduced cost in mental energy. However, because I spend less mental energy to get better outcomes, I could also lose mental sharpness over time. There isn't any long-term research showing this yet, because AI is still so new. I can only go off a basic understanding of how stress affects the brain's ability to stay sharp, and fundamental principles such as hard, long work building long-term success. I like the example of derivatives, since I taught math: once a student learns to differentiate an equation, they never lose the ability to do it again. Derivatives are incredibly hard, though, and if AI can do them for me, I never have to worry about losing the ability. It's deferred to something else. In essence, that means I can't do derivatives. That's not good.

But... I can make cool wallpapers!

"What we obtain too cheap, we esteem too lightly." — Thomas Paine
