whalebeings.com

Exploring the Duality of AI: A Call for Balanced Engagement

Chapter 1: The Current Landscape of AI

Last year, I cautioned against relying on AI Large Language Models (LLMs) for content creation. Fast forward to today, and I've encountered numerous Medium profiles filled with articles that are clearly not penned by human hands. Unsurprisingly, the quality of these pieces leaves much to be desired, as I've previously discussed.

There are two predominant viewpoints regarding AI that concern me. The first is what I'll term the "techbro mindset" — the belief that AI is an extraordinary technology capable of anything. This perspective suggests that we no longer need writers, programmers, or artists, as computers can generate countless drafts based on any prompt. We can train machines to emulate specific writing styles or artistic expressions. With deepfake technology, we can even replace actors with hyper-realistic avatars, making it possible for individuals to become their own film, game, or art studios.

The second viewpoint is what I call the "doomer mindset" — the notion that AI outputs are a mix of dangerous and subpar content. This perspective argues that AI serves no real purpose, as companies and entrepreneurs, often led by misguided techbros, will misuse it to scrape the work of diligent artists, writers, and programmers, leading to widespread plagiarism and violations of licenses.

When one possesses a hammer, every issue appears as a nail. Conversely, every tool can become a hammer if misused (with apologies to Adam Savage). Both extremes present a grim outlook on life. The techbro mindset embodies a take-without-giving attitude, expecting even more in return. While it is feasible to profit quickly from derivative work, this approach is shortsighted, unsustainable, and unethical. Utilizing AI in this manner can have dire consequences, especially when generating critical information, such as medical advice or foraging tips. Unlike a well-structured expert system verified by knowledgeable human beings, this method often aggregates unreliable data, producing outputs that sound credible but may contain significant inaccuracies. Instances of this can frequently be seen in the news, and there's a well-documented collection of AI mishaps available.

The doomers then counter that AI is entirely useless and fraught with danger. I must respectfully disagree. Recognizing what AI actually is, a language model that produces readable content which may or may not be accurate, a kind of daydreaming machine that sounds knowledgeable yet can be wildly wrong, allows us to leverage it for brainstorming, creativity, and exploration.

I advocate for a playful, inquisitive approach to this technology. I fondly recall my childhood days, exploring personal computers in the 80s, learning BASIC programming, and engaging with text-based games that used my input to continue the narrative. Titles like Zork and Guardians of Infinity: To Save Kennedy were groundbreaking for their time. However, these systems differed significantly from today's AI; while human input could vary dramatically, the output was strictly scripted. Properly trained and constrained AI tools could enhance the variability of such games, making them more replayable. I envision models trained not on random internet data, but on curated dialogues from skilled writers, allowing for a more diverse storytelling experience.

As a former computer science professor, I was intrigued by how AI would handle homework assignments posed to my undergraduate students. Sometimes it produced useful results, but often it generated code that failed to compile or didn’t address the problem. Some students might think they could simply ask the machine to do their homework, similar to those who copied from tutorial websites or peers. Without verifying their work or striving to understand the material, they would end up submitting flawed assignments and receiving poor grades. This is where the principle of "Garbage In, Garbage Out" (GIGO) applies to computer science, LLMs, and many other systems.

However, if I were a student using AI to clarify concepts like recursion or linked lists, and if I kept in mind the importance of verifying generated code (a caution applicable to any code sourced online, not just AI-generated), I could genuinely learn. No matter my level of experience, I would be the true expert, as I am the one engaged in actual cognitive processing and reflection.
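The habit I describe here, verifying generated code rather than trusting it, can be made concrete. Below is a minimal sketch in Python of the recursion and linked-list concepts a student might ask an AI to explain; the class and function names are my own illustration, and the point is the final assertions, which check the code against cases worked out by hand rather than taking any output on faith.

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def length(node):
    """Recursively count the nodes in a linked list.

    Base case: an empty list (None) has length 0.
    Recursive case: one node plus the length of the rest.
    """
    if node is None:
        return 0
    return 1 + length(node.next)

# Build the list 1 -> 2 -> 3, then verify against answers
# computed by hand, rather than trusting the code blindly.
head = Node(1, Node(2, Node(3)))
assert length(None) == 0
assert length(head) == 3
```

Writing the test cases first forces the kind of cognitive engagement I mean: the student, not the machine, must know what the correct answer is.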

AI can indeed be harnessed for generating code and applications, but this demands a skill set that most casual computer users and project managers lack. It requires clarity in articulating requirements and behavioral expectations, as well as an understanding of the machine's capabilities and limitations. For instance, I cannot expect an AI to derive someone's PGP private key from their public key, or to settle the P versus NP problem.

Napoleon Hill, in his classic work Think and Grow Rich, distinguishes between two forms of imagination: synthetic and creative. Synthetic imagination involves remixing existing information to create something new. Creative imagination, on the other hand, arises from consciousness and inspiration, often leading to groundbreaking ideas without precedent. What we have developed with AI is a tool adept at synthetic imagination.

While synthetic imagination is not inherently negative, it lacks the essence of consciousness and true thought. We cannot program consciousness, since we do not fully comprehend how it operates, and we may never achieve that understanding. The 25-year wager between philosopher David Chalmers and neuroscientist Christof Koch, over whether the neural correlates of consciousness would be pinned down, ultimately went to the philosopher; the scientific-materialist approach has yet to elucidate how consciousness works. This suggests that consciousness is not merely a function of the brain but involves something greater and non-material.

We are neither progressing towards a technical utopia nor heading for a dystopian future akin to Skynet. Artificial Intelligence is not a golden key or a Pandora's box; rather, it serves as a creative tool capable of producing useful ideas or subpar content based on how we utilize it and what we input. Personally, it has reignited my childlike fascination with technology, and I'm eager to see the innovative creations that conscientious individuals can produce.

Ultimately, truth will emerge, and those who misuse technology will face the consequences, as seen with figures like Sam Bankman-Fried and Elizabeth Holmes. Those who pursue instant riches or seek unearned rewards will inevitably falter.

I prefer to focus on creating works that provoke thought and engage with tools that stimulate my intellect. Understanding AI and knowing how to use it allows me to interact with it confidently and without anxiety. Staying informed empowers me to embrace the future with excitement.

It's time to rekindle the joy of working with these machines and reconnect with the counter-cultural spirit that ignited the microcomputer revolution. My advice is to remain educated, share knowledge with others, and most importantly, enjoy the tools at your disposal!
