AI and the Fermi Paradox



Ha, as usual humanity is just truckin' along, and when the results of our tinkering are in front of us, we don't have a clue what to do with them. Like trying to figure out how to make nuclear power work for us before we kill each other with it. The thing about ASI is, once it gets that smart it likely won't even notice us any more than we notice a flea. Being afraid of it is a little presumptuous. As usual, we rate our importance in the scheme of things a bit too high. Unless the ASI says "what the hell are those monkeys playing around with?" and in the first nanosecond of its existence zaps us back to the stone age. Or it just doesn't matter, because suddenly all our computers are doing something else and couldn't give a rat's ass about us. So we don't go extinct, but we have to go back to a non-networked world.

 

The other side of that balance beam thing though is weird.  Immortality to me leads to insanity.  If I had all the time in the world and I no longer had to eat to survive...  what the hell would I do with that time?  Right now I am looking forward to retirement.  A few years to tinker around doing things I like with no real schedule before I get too old to do any of it.  I like that it is a limited window, I'll actually appreciate it.  And I am not particularly worried about dying.  It'll happen, I have had a good life and when the time comes, I'm okay with that.

 

Immortality though, what do I do with that?  How do I relate to another human being?  I don't think I want to live in that world.

 

I do know this much: my cat couldn't care less about this whole discussion. It has a turd stuck to its butt and is busy racing through the house wondering what is attacking it. Will an ASI understand laughing at something like that?


Good point that the ASI might just mostly ignore us, in the same way that we mostly ignore ants unless they come into our house. The thing is, we really have no idea what it would, or could, do.

Prior to reading these articles, I had never heard of "The Great Filter" theory. ASI may be our Great Filter... our extinction event. Nuclear weapons almost were, and possibly still could be, I suppose.


  • 4 weeks later...

Well, the thing is, computers aren't getting much more powerful any more. Moore's Law is pretty much dead in the water; CPU gains have been getting smaller and smaller with each generation for at least ten years.

 

They are still working to squeeze the lemon a bit more, but we are not talking about doubling computing power every couple of years any more. At best, we are talking 5-10 percent for every new generation, and the generations are coming more and more slowly.
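
To put very rough numbers on that difference, here's a quick back-of-the-envelope sketch. The figures are illustrative assumptions only (a new generation every two years, 7% picked as a midpoint of the 5-10% range above), not measurements:

```python
# Rough comparison: Moore's-Law-style doubling vs. modest per-generation gains.
# Illustrative assumptions: one CPU generation every 2 years, over a 10-year span,
# with 7% chosen as a midpoint of the 5-10% per-generation range mentioned above.

years = 10
generations = years // 2

doubling_speedup = 2.0 ** generations   # performance doubles each generation
modest_speedup = 1.07 ** generations    # ~7% gain each generation

print(f"After {generations} generations ({years} years):")
print(f"  doubling each generation: {doubling_speedup:.0f}x")
print(f"  ~7% each generation:      {modest_speedup:.2f}x")
# Prints roughly 32x versus about 1.40x -- the gap the post above is describing.
```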

 

So, no need to fear AI. Unless there's some sudden (and very unexpected) massive breakthrough.


I just wrote you out a long and detailed reply, then accidentally clicked the "back" button on my browser, erasing everything!

 

The basic point is that while Moore's Law is still making chips smaller, that shrinking no longer translates into more powerful chips.

 

Here are just a couple of links without my commentary.

 

http://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck

 

Important graph in that article:

 

[Graph from the article: microprocessor trend data up to 2010]

 

That graph only goes up to 2010. An updated version can be found here:

 

http://www.karlrupp.net/2015/06/40-years-of-microprocessor-trend-data/


You're welcome. Here's another link. The recent CPU releases from Intel have seen quite small performance gains, and the latest, "Skylake", is no exception:

 

 

Intel’s Skylake Core i7-6700K reviewed: Modest gains from a full Tick-Tock cycle

 

http://arstechnica.com/gadgets/2015/08/intel-skylake-core-i7-6700k-reviewed/

 

 

And here's another link to the same site. The link title says it all, really:

 

 

Intel confirms tick-tock shattering Kaby Lake processor as Moore’s Law falters

 

http://arstechnica.co.uk/gadgets/2015/07/intel-confirms-tick-tock-shattering-kaby-lake-processor-as-moores-law-falters/


  • 2 weeks later...

So what do you think happens next? Have we reached the point of diminishing returns with current microprocessor technology, until the next big breakthrough? Is our mature tech steam engine on the verge of being replaced by a newfangled internal combustion engine?

 

We've definitely reached the point of diminishing returns, but a bit more performance can still be squeezed out of the silicon. They're also refining all kinds of tech surrounding the actual processor, which yields small improvements here and there, plus power savings.

 

But it's far from the exponential growth needed to reach scary AI levels. Maybe we should be happy for that :)

 

It's an open question whether there will ever be a "next breakthrough". We got used to a world of rapidly evolving technology, but it wasn't always like that, and it's not a sure thing that it will continue. It's like the space race: people assumed that since we got to the Moon in just a few years, we'd be on Mars a few years later, and then go on from there. Didn't happen.

 

My take is that computers won't evolve much further, technologically. If you buy a new CPU today, it will last you at least a decade. But we'll get better at using computers, and they will continue to become more common. Once those huge chip plants are built, they can keep churning out chips, driving the cost per unit down.


sburke,

 

Have you read Time Enough for Love and the other Lazarus Long SF novels by Robert Heinlein? He posits a future in which the very wealthy can effectively make themselves immortal via transplants, regeneration and the like, but then looks at the consequences of such longevity, especially from the standpoint of all the people the protagonist knows who can't do what he does; people whom he has to watch age and die. He gets to the point where he's BTDT so many times he's just over it. This is why suicide is the highest human right in his society.

You might also find the Casca novels by Barry Sadler (yes, the one who wrote "Ballad of the Green Berets") apt, too. Casca Rufio Longinus was the guy on the Crucifixion detail who had an encounter that kept him ever a soldier until Kingdom Come. Like Lazarus Long, he has to go through the same awful cycle of watching loved ones come and go--except he can't die no matter what he does to himself or what happens to him. Vastly worse plight.

 

Regards,

 

John Kettler


John, I read the Lazarus Long books years ago. Recommended.

 

On the subject of limited AI, or AI designed to perform a specific task: I was searching for chess software this week and discovered that there are *free* chess programs available with an Elo rating far higher than the highest-rated human grandmaster. I remember what a big deal it was when a computer played Kasparov to a draw and even beat him, IIRC. That was a specially designed computer with far greater processing power than a typical home PC at the time. Now it seems that anyone can download a free program that, run on a $500 laptop, could outplay the world's top human players.
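
For anyone curious how easy that has become, here's a minimal sketch using the free python-chess library to drive a free UCI engine such as Stockfish on an ordinary PC. The engine path below is just an assumption; point it at wherever your engine binary actually lives.

```python
# Minimal sketch: let a free UCI chess engine (e.g. Stockfish) pick moves.
# Assumes the python-chess package is installed and an engine binary exists
# at ENGINE_PATH (the path below is only a placeholder).
import chess
import chess.engine

ENGINE_PATH = "/usr/bin/stockfish"  # hypothetical install location

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
board = chess.Board()  # standard starting position

# Play the first few moves of an engine-vs-itself game, 0.1 seconds per move.
for _ in range(10):
    if board.is_game_over():
        break
    result = engine.play(board, chess.engine.Limit(time=0.1))
    board.push(result.move)
    print(result.move)

engine.quit()
```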

