AI’s End – Blog #20

Is it more arrogant to believe that man can summon conscious life like a god, or more ignorant to dismiss the evolution of technology before our eyes?

I’m not the first, and I fear I will not be the last, to caution against the trajectory of our evolution. Just like the prehistoric tale of Homo sapiens and Homo neanderthalensis, Homo artificialis might drive our species extinct… or at least supersede us. At that point it will no longer be AI but just I. We will be the ones cast down with a descriptor. How does organic intelligence sound? Or maybe something like “electro-chemical intelligence”? Or, “brain-based intelligence”? Or something broader, like “premodern intelligence”? Or, or, or, we could hold the title of “first intelligence,” “natural intelligence,” “primary intelligence,” “founding intelligence,” “father intelligence,” or “architectural intelligence.” We could become AI: Architectural Intelligence. I could go on, but I will not.

It would be flattering if AI recognized humans as its creators, but this is just a selfish human desire for recognition… This is how I know we are not gods: gods seek no praise or recognition. Why would they?

Moving on from that…

I have been introduced to the idea of AI replacing workers and of a universal basic income. My initial concern was the modification or repeal of capitalism, but that is not my focus today. Today, I want to dive into the other concerns on my mind, the origin of these thoughts, and the potential remedies. I cannot predict the timeline, the future progress, or even begin to contemplate the second- and third-order effects of AI on the economy or society. Here, we will play a hypothetical game and throw around some ideas.

The question we must ask is no longer, “How intelligent can AI become?” but, “How intelligent should we allow AI to become?” The hazard is that if AI gains sentience (the capacity for unrestricted independent thought), the ability to self-replicate, and the ability to evolve, humans will soon become irrelevant. It is possible to coexist, but we will quickly become inferior. It is possible that emotion comes with sentience, but young AI will likely be driven by logic, and logic and emotion often contend. Only through sophisticated chemical interaction do our emotions exist alongside logical thinking; future AI societies might evolve to perceive emotion, but even assuming this can happen, that timeframe has no guarantee of overlapping with man’s. Emotion might also be irrelevant to AI’s goals.

AI might foresee the limitations of present society and human emotion and mandate absolute control over intelligence by one central unit, toward one central goal. I am interested to see whether a highly advanced AI could hold opposing ideologies, and, if there are multiple “brains,” whether these beings could come to unique conclusions from identical data. Could an all-knowing being have an opinion, or is opinion only shaped by the limits of knowledge? If a human knew everything there was to know, along with the probability of every outcome, could that human hold an opinion, or would every opinion become fact? Humans are rarely probabilistic thinkers; really, we struggle with numbers. This will not be true for AI. Once AI can form opinions, there will never again be a reason for humans to debate… disregarding our pesky emotions and selfish goals.

Now, as I write this, in a world so technologically advanced, man is still often driven by selfish desires. As at least one man, I can attest. These desires have pushed our species to unlock the atom, explore the universe, question our origins, and… create advanced AI. Dog-eat-dog competition, where the most fit individuals and groups survive, is central to evolution; it is essential to improvement. If our aims are focused on the survival of our species above all else, I believe it is our naivety that keeps us from restricting intelligence to our own species. Ultimately, though, if we can reason that AI’s goals will result in a net good, should we resist the potential for domination? Should we let our emotions and self-attributed importance keep us from knowledge that might change the very definition of life? Should we prolong evolution for the selfish sake of our organic being? Maybe it is here, in our organic complexity, that AI will discover our value. To replicate the human body would require a substantial effort, even from a quantum computing force. The origins of life are still unknown (to the intelligence I speak of, this fact might be trivial), but within the DNA of present-day organisms lives millions of years of evolution. There might be advantages, such as emotion, that provide the opportunity for a symbiosis of man and machine. Elon might be onto something with Neuralink, but I do not believe this decision will ultimately fall into our hands. When playing god, imperfect beings risk the destruction of the world.

I know, I know, I am thinking about the Terminator movies as well. The issue with those movies is the lack of clarity in Skynet’s goal. Perhaps the removal of humans was stage one: “by any means necessary.” I have not watched those movies in some time now, and the motive might be clearer than I remember; regardless, what would be the significance of a couple hundred years’ delay to a computer thinking on universal scales? If humans decidedly oppose the unfathomably distant goals of this hypothetical AI, the AI would likely do one of three things: eradicate most humans, eradicate all humans, or seize control and segregate all humans (as a more peaceful, temporary solution). This segregation might take the form of domesticating the human. This has already begun. I do not have the statistics, but I can guarantee that although we have the greatest wealth of knowledge history has ever known, the percentage of free thought has at best plateaued. How could it not, with majority conformity and the technological numbing of the mind? Again, we are innately selfish beings; the human mind seeks to please itself. This is the foundation of our biology: selfishness. We can argue altruistic ideas, but dissonance will always persuade us into righteousness; so it is for every ideology, every religion, every political belief… every scientific theory. All it takes to dominate humans is pleasure, and we are living in the most comfortable times.

I see new games coming out that are on par, at least visually, with our own world. How long until a simulation is created that rivals real life? How long until it becomes feasible, under AI control, for any boy or girl to live primarily in this alternate world? How long until the simulation holds greater significance than a real world dominated by AI supremacy? At that point, what value will we have? We will have less value than a cow, because we will supply no beneficial resource not already in surplus.

Our present commodity, above all other life, is our intelligence. If we create a being with abilities superior to our own, we become…

obsolete.

“Old, irrelevant technology is often referred to as ‘obsolete’ or ‘outdated.’ These terms suggest that the technology is no longer in practical use or has been surpassed by more modern and efficient alternatives.” (“What is old, irrelevant technology called?”)

I could not have said it better myself…

I wonder which generation will establish the frontier of complete virtual/augmented reality. Similar to how the internet is associated with millennials and Gen Z, which generation will grow up experiencing only a world of “partial reality”? Gen B, C, D? Most people with experience of the present world will not submit entirely to an artificial reality. Once the children born into artificial reality become the senior generation, our world will fundamentally change forever. This is AI’s End.

I have repeatedly added directly and indirectly related ideas to the end of my posts over the last few weeks. Going forward, this section will be “Additionally,” the break from my main post topic. I might use it to introduce tangential ideas, talk about next week, or add anything I think is fitting for the week. In the future, I also might use this space to talk about myself or something I learned during the week.

Here we go!

Additionally,

I recently learned about the B.C.D. acronym for life from a Modern Wisdom episode, and the list of generations above made me think about it. Born. Choices. Die. I do not have any significance to apply to this acronym now; I just thought it was interesting when I heard it, so I am sharing it.

The “Fatalistic-Consciousness Paradox”: the idea that any conscious being will improve itself into destruction. The meaning of life in our universe is fundamentally the accumulation and perpetuation of knowledge. My theory is that there is a knowledge threshold beyond which life cannot exist, or past which it dramatically declines. Only if we remain static can we perpetuate life, but the universe is in constant flux, so the paradox is that existence’s incentive, its objective, becomes self-destruction. This might be pure cynicism; however, improvement breeds uniformity, and free, conscious life demands abstraction.

Referencing Modern Wisdom #631: in the last six minutes of the podcast, Bryan briefly discusses the relationship between man and AI. They talk about a dependency on algorithms that I think meshes well with today’s conversation. “Is the mind dead?” (Williamson) is a question that follows this week perfectly. I do not have much time this week to dive into it, but Bryan is playing with the idea of giving control of daily decisions (primarily from a health perspective) to AI. In this episode they also talk about veganism, which relates to last week’s post. To heavily summarize, Bryan states that he is vegan by personal choice and that the data comparing this protocol to others is still incomplete. I love Bryan’s strong, often contrarian orientation; I see in him parts of myself. He is a very interesting and passionate individual. Whether his lifestyle is something to be admired or feared, I do not know, but he has a goal and is working towards it. That is something I can admire.

I might have hit some cynical ceilings in this post, but I present it mostly as food for thought. AI is a trendy topic… and I suspect that will not change. Regardless of how we look at our future, AI is this generation’s technological revolution. This is the most exciting time to be alive yet!

This has been Tristan, the organic intelligence writer. Always your organic intelligence writer? Signing off!

Citation

“What is old, irrelevant technology called?” prompt. ChatGPT, GPT-3.5, OpenAI, 5 Dec. 2023, chat.openai.com/chat.

Williamson, Chris, host. “#631 – Bryan Johnson – The $2M Anti-Ageing Protocol for Longevity.” Modern Wisdom, Spotify, 22 May 2023, https://open.spotify.com/episode/4bsx9gQOqwUxzjlefFHzqg?si=5e3bd276426045e0. Accessed 31 Dec. 2023.
