What do you guys think about it today?
I just started studying AI on my own a few months ago, and maybe the goalposts for the collective consciousness around it have moved a lot, but I can see easy answers to all your questions there. I could only see two main themes.
(1) A singularity based on Moore's Law still looks quite possible, since Moore's Law continues to hold even today. To me it simply means that computers will keep getting faster, and at some point a single personal computer must become fast enough to emulate the human brain as a whole. That's not when the singularity would happen yet, though. And that's just one way it might happen.
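The doubling argument above can be sketched with some back-of-the-envelope arithmetic. All the numbers below are illustrative assumptions, not established figures: the compute of a current personal computer, the brain-scale estimate, and the doubling period are all contested.

```python
import math

# Rough, illustrative assumptions -- not established facts.
current_flops = 1e13      # assumed compute of a high-end personal computer (FLOPS)
brain_flops = 1e16        # one common (and contested) estimate of brain-scale compute
doubling_years = 2.0      # classic Moore's-Law doubling period

# Under pure exponential growth: doublings needed, then years to get there.
doublings = math.log2(brain_flops / current_flops)
years = doublings * doubling_years
print(f"~{doublings:.0f} doublings, ~{years:.0f} years")
```

With these particular guesses it comes out to roughly 10 doublings, i.e. a couple of decades; changing either estimate by an order of magnitude shifts the answer by only a few doubling periods, which is why the argument is fairly robust to the exact numbers.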
(2) If a sentient intelligence simply went off on its own way and completely ignored us, we would just keep trying to create new ones that wouldn't ignore us. So that scenario is mostly irrelevant to think about. It may have already happened and we wouldn't know.
(3) Since there was no third theme, I'll add one here: what if conscious software were already possible to create today? From my perspective, the main reason it doesn't exist yet is wrong development approaches, such as trying to be too careful.
The internet as a whole is already much more powerful than a single human brain, even if it isn't one cohesive, fast organ communicating at the speed of neurons.
And trying to control such a powerful weapon could be catastrophic, IMHO. We may have only one shot at making the superintelligence something that won't destroy us, and from every science project ever made, we know humans don't get it right on the first shot. It should be developed in a way that doesn't offer us any control.
Now, this is still just my own theory, but I do have a lot more reasoning as to why, which I still need to write down.
