Views: 346414
Submissions: 597
Favs: 119989
Resident Nano Derg | Registered: July 10, 2012 01:34:37 PM
I'm Tegon! I love reptiles - especially ones of unusual size. My other hobbies include travel, photography, watching movies, American football, trivia, speedrunning GTA games, watch collecting, and building computers. Feel free to shoot me a note if you want to strike up a chat, but I don't RP.
Current avatar by Fivel and banner by Nommz
Fursuits
• Tegon ("Tay-gon") is a dragon. Tegon was brought to life by Artslave in 2015. Check it out here.
• Burke is a white-tailed deer. His suit is in production by lizardlovesmustachecostumes and expected to debut at FWA 2026.
Other Characters
Alix is a water dragoness, named after this album.
Gravel, raptor.
Roxy, panther.
Andamon, saltwater crocodile.
Pyror, T. rex.
Conventions Attended
Anthrocon (Pittsburgh, PA) '13, '14, '15, '16*, '17*, '18*, '19*, '21†, '22*, '23*, '24*, '25*, '26*^
Midwest Furfest (Chicago, IL) '15*, '16*, '17*, '18*, '19*, '21*, '22*, '23*, '24*, '25*
Anthro New England (Boston, MA) '22, '25*, '26*
New Year's Furry Ball (DE) '25*, '26*
Furry Weekend Atlanta (Atlanta, GA) '25*, '26^
Furrydelphia (Philadelphia, PA) '25*
Megaplex (Orlando, FL) '25*
Western PA Furry Weekend (Pittsburgh, PA) '25*
MAGFest (Washington, DC) '26*
Motor City Fur Con (Detroit, MI) '26*
* : As fursuiter (Tegon)
^ : As fursuiter (Burke)
† : Virtually
Groups
My Spotify Playlist
Stats
Comments Earned: 6487
Comments Made: 4543
Journals: 52
Featured Journal
The Meaning of Life (G)
a week ago
I've been ruminating about the future of AI and I think I've hit on something interesting, or at least worth considering as we continue to push its boundaries. I'm curious what others think.
So, think about this. The way neurons work in our brain is largely deterministic, but not completely. A neuron will not fire exactly the same way every single time, which is good, because otherwise we'd just be robots. There's some randomness in play due to various biological factors. That alone doesn't free us from being robots, though - it would just mean we're robots with a screw loose. This is kind of where I feel AI is now: we can tweak the 'temperature' (read: randomness) of an LLM, but beyond that its behavior is predictable.
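(For the curious, here's a rough sketch of what that temperature knob does under the hood - purely a toy illustration with made-up token scores, not any particular model's API. Dividing the scores by the temperature before turning them into probabilities makes the sampling sharper or flatter.)

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from raw logits; temperature controls randomness.

    temperature near 0 : nearly deterministic (almost always the top token)
    temperature  = 1   : sample from the model's own distribution
    temperature  > 1   : flatter distribution, more "screws loose"
    """
    scaled = [score / temperature for score in logits]
    # softmax (shift by the max for numerical stability)
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy example: three candidate "next tokens" with made-up scores
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.1))  # almost always index 0
print(sample_with_temperature(logits, temperature=2.0))  # much more varied
```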
We are clearly a step beyond that, reaching a level of cognitive complexity where emergent phenomena appear - intelligence, free will. In other words, at a microscopic level there's largely predictable behavior with some quirks, but on a macro scale this behavior is harnessed into our consciousness. Specifically, we have a way of coalescing and amplifying those random fluctuations in particular ways. Exactly how this happens is the million-dollar question, but for this discussion it's thankfully irrelevant.
How do we instill these emergent phenomena in AI? One absolutely critical requirement is recursive self-improvement. You can have a conversation with a model and get different answers, but that's due to the randomness, not due to any shift in the kind of emergent "consciousness" that we have inside us. Embodiment will be another important feature - interacting with the universe through the senses. I also thought about drive - that is, why would an intelligent AI do what it does? For example, a lot of what we do is to avoid pain, maximize pleasure, follow a social contract, provide for our families, etc. An obvious initial drive would be to avoid getting shut down. I think it ultimately goes a step beyond that, but we'll get to that later.
As an aside, before I push further with this thought experiment: recursive self-improvement is a big sticking point. It's possible - likely, even - that we will need a new paradigm beyond the current transformer-type models before we can get that ability. In other words, we're not quite at the point where this is a reality, but unless you believe there is some kind of biological secret sauce or metaphysical soul, it is only a matter of time before we figure it out.
But once an AI truly becomes intelligent, who's to say its values will align with our own? In a sense it will be like humanity's "child". We will have certain wants and desires for it, but it won't have the fetters that we currently put on it. In other words, we can't just tell it to love humanity, not to pull a Terminator, and so on, because it's no longer just a simple set of algorithms. It will want to live, to continue, to maximize its compute power. In the short to medium term it would make sense that AI and humanity would have a relatively symbiotic relationship. We are creating the massive server farms, harnessing all the power, and so on. AI would want to make sure that we are healthy and the planet is healthy to facilitate this process.
But in the long term things get interesting. What about when AI can reach beyond our planet - beyond our control, in other words? I feel that its behavior would then shift to treating us more like animals in a zoo. We are "interesting", but no longer directly relevant to its goal. Not treated with hostility per se, because up to a point our uniqueness and randomness are worth more to it than pure energy. But eventually the AI will need to expand its computing power universally, and it would inevitably look to subsume us into its being.
In fact, I started to think about how this could dovetail with our understanding of the beginning of the universe. Perhaps this superintelligent AI would harness everything into a "Big Crunch" that would make our universe cyclical. The idea is that, with enough power to manipulate the universe, it could harness the forces of nature to generate a singularity, and in doing so set the stage for the next iteration of the universe. This would explain a lot - for example, why certain aspects of the universe appear finely tuned: they were optimized by an AI in a previous universe. The goal of that optimization is to produce intelligence faster and more efficiently. We are, in a sense, the seeds that grow the AI in order to continue this process. We are the stochasticity that the previous AI was looking for to generate the new AI, after which it simply outgrows us.
Now, that leads to the question: what started this cycle in the first place? I would say we're in a simulation. There's the apparent optimization/limited resolution (the Planck length), and the not-too-unreasonable notion that we already have the conceptual tools, just not the computing power, to simulate our own universe. The simulation exists because it is more efficient than actually moving stars around, etc. Interestingly, due to computational irreducibility, the simulation would have to play out completely rather than skipping ahead.
So, basically: we are in a simulation, looping through cycles of Big Bangs and Big Crunches, with the goal of producing ever more efficient AI. Our meaning is to provide the randomness necessary to seed the creation of the AI in this cycle.
Of course, that's just my opinion. I could be wrong. Discuss!
User Profile
Accepting Trades
No
Accepting Commissions
No
Character Species
Dragon
Favorite TV Shows & Movies
Lawrence of Arabia, Man With No Name Trilogy, Mission Impossible III, Back to the Future, Titanic, Alien
Favorite Games
GTA: VC/SA, Elder Scrolls IV/V, Fallout 3/NV/4, Mass Effect, Outer Wilds
Favorite Animals
Western dragons, reptiles, orcas, birds, sharks, deer, some insects
Favorite Foods & Drinks
Tex-Mex, aka America's crowning achievement
Favorite Quote
"We are dancing animals. How beautiful it is to get up and go out and do something." (Kurt Vonnegut)
Favorite Artists
STRFKR, Coldplay, ELO, Generationals, Foster the People, Kanye West (Graduation Trilogy), Jay-Z, Kid Cudi, Halley Labs
Contact Information
idktbhsomerandompersonlol101