FlowerChild wrote: Don't get me wrong, I'm happy to observe or participate in a lively debate on the topic. Just given the wording, specific examples you provided, and history involved, it was hard not to take it as a jab / mild troll. No biggy, just a reminder that it's a topic near and dear to me.
Didn't cross my mind at the time. I'll try to explain a little, using parts of posts as an example:
Sarudak wrote:Now imagine we produce superintelligent AI (which seems like a near certainty to me) and computers and robots are so good at every task that they can completely out-compete humans at everything. Now assume that the AI remains benevolent and everyone gets to partake in the massive production surplus. Many people like to assume that humans then will be free to create and invent. Except the one problem that AI will be better at that too. Will we actually paint and compose music when the AI does it better than us? Will we bother trying to invent anything when AI is so much smarter and faster than us that it has already thought of the things we would try to think of? What will be left to humans other than the pleasures of leisure with no real meaningful accomplishment? Might as well stick a wire in my brain and turn on the happy.
I don't worry about things like this. Of course we'll produce a superintelligent AI. We already have AI that beats humans at just about every complex thinking problem except pattern recognition. What we're seeing in the latest batches is truly creative AI: AI with "the creativity spark" we consider so singularly human (and maybe slightly available to some primates).
To me, these debates usually come down to the following:
1) We don't actually need the new tech to commit the suggested doom scenario:
The mosquito thing above is a good example. CRISPR/Cas9 is bad because it could create new deadly diseases? We've been able to weaponize diseases for centuries (medieval armies catapulted rotting corpses into besieged cities) and to engineer deadly diseases for decades. CRISPR/Cas9 might let us target ethnic groups? So what? Truly bad people will kill people regardless. All Stalin needed to kill a dozen MILLION people was plain old hard work. Columbus wiped out the population of an entire region of the world basically by himself (and not accidentally through disease; this douche did it deliberately). Pol Pot went so far as to destroy any trace of cultural identity in his genocide. Leopold II bought an entire region of Africa as private property and murdered more people than Hitler.
2) We're a lot further ahead scientifically than people think:
Your smartwatch can beat Kasparov at chess now. Not just beat him: humiliate him. Just a few years ago, people said computers might beat top Go players in 20 years; all it took was someone really focusing on the problem. We probably could have done it earlier, and in about 3 years your smartwatch will probably be able to do it too. Watson, the Jeopardy AI, basically read a few thousand books per second (actual reading with comprehension, as humans do) to beat humans at Jeopardy. When AI programmers got bored of beating world-class chess players, they started playing AIs against each other. In the beginning, AI beat humans through sheer positional play, but surely it would never play truly creatively like Bobby Fischer or Mikhail Tal, right? Wrong. The new AIs that play other AIs need an edge, so they started doing deep, narrow searches (as opposed to the shallow but broad searches of the chess AIs that beat Kasparov) to find creative moves that outperform the other AIs. It turns out the moves they come up with are straight out of Tal's playbook, only played in games that are already positionally perfect. Google built neural networks similar to the Go AIs to try to solve the image recognition problem. They haven't solved it yet, but what they did "accidentally" create is an AI that can paint creatively. There goes another thing that was supposed to require a human consciousness.
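The shallow-broad versus deep-narrow trade-off above can be sketched with a toy minimax search. Everything here is hypothetical: the "game" is just a tree of numeric scores, not real chess, and the branching and depth figures are illustrative, chosen so both searches spend the same leaf budget.

```python
# Toy minimax illustrating the search-budget trade-off: a shallow, broad
# search and a deep, narrow search can visit the same number of leaf
# positions (27**2 == 3**6 == 729), distributed very differently.

def children(score, branching):
    # Hypothetical move generator: each candidate move nudges the score.
    half = branching // 2
    return [score + d for d in range(-half, half + 1)]

def minimax(score, depth, branching, maximizing=True):
    """Evaluate a position `depth` plies ahead with `branching` moves per node."""
    if depth == 0:
        return score  # leaf evaluation is just the running score
    child_scores = [minimax(c, depth - 1, branching, not maximizing)
                    for c in children(score, branching)]
    return max(child_scores) if maximizing else min(child_scores)

# Shallow but broad: 2 plies, 27 candidate moves per position (27**2 leaves).
broad = minimax(0, depth=2, branching=27)
# Deep but narrow: 6 plies, 3 candidate moves per position (3**6 leaves).
deep = minimax(0, depth=6, branching=3)
```

In this symmetric toy both searches agree on the score, but the budget split is the interesting part: the deep-narrow search gives up move coverage at each ply to see consequences six plies out, which is roughly the trade described above.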
3) We can't actually predict the future, so speculative doomsday theories are usually way too specific and archaic:
Steam engines were going to be the end of humanity, because trains would scare cows and that would collapse the global milk market. Instead, they spawned a billion applications that transformed the entire world into one those people wouldn't recognize. The problem is that the total amount of human knowledge doubles every X years (3, IIRC). It DOUBLES. At this point there's so much more knowledge than a single person can even comprehend that predicting 2 years into the future is futile. When I look back at the '90s, I see a completely different world in many ways. The only reason it stays conceivable is that humans don't adapt to technology as fast as we think. When Hitler decided to leverage technology, he accidentally sent us into space (the V2 rocket program), because he was batshit crazy and took away that adaptation time. This ties back into point 1: a truly scary fuck of a person can do things we can't even conceive of, with or without the tech.
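To put rough numbers on that doubling claim (treating the half-remembered 3-year doubling time as an assumption, not an established figure):

```python
# Exponential growth: if knowledge doubles every T years, then after
# t years it has multiplied by 2**(t / T).
T = 3.0  # assumed doubling time in years ("3 IIRC" above)

growth_per_decade = 2 ** (10 / T)      # roughly 10x in ten years
growth_per_generation = 2 ** (30 / T)  # 1024x in thirty years
```

Even if the true doubling time were twice as long, a decade would still mean a roughly 3x jump, which is why forecasts even a few years out go stale so quickly.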
So that's why I don't worry. Not because I believe we can't destroy ourselves, but because I believe we always could, going back decades, even centuries. Humans have inherently had that power ever since our brains expanded, because we can look at any system we're stuck in and utterly break it. So there are two options. Either we destroy ourselves quickly through a doomsday scenario, and I can't do anything to change that; we're waaaaay past the point where we gained that capability, so there's no way to stop it. Or I assume we don't, and act accordingly: I try to stop the slow doomsday scenarios (like climate change) and try to work locally, stopping plain old humans from being dicks to plain old humans. It also means I encourage potentially dangerous technologies like these, because they could actually help me with that second option, while only very marginally making the first option more likely (if at all; again, Stalin could have destroyed us with WWII-era tech that looks laughable now).
The last point I'd like to mention is that I'm really empathetic. When I see a person suffer, my heart breaks into a million pieces. I devote large chunks of my free time to charitable work. But we don't need new technology to make people suffer; we can do that just fine with a pointy stick. Chimpanzees know the concepts of war and genocide, so even going back to cave times, like some hippies suggest, wouldn't change that. CRISPR/Cas9, atomic bombs, and steam engines won't destroy humanity; humans will. And they don't need any of those techs to make it happen.