all 25 comments

[–]ID10T 5 insightful - 1 fun (2 children)

I firmly believe AI should be strictly regulated. Its potential is as dangerous and powerful as nuclear weapons and atomic energy.

[–][deleted] 4 insightful - 1 fun (1 child)

Yup. The moment this is used in warfare, we are in trouble. AI automation of military strategy, drone bombers, military vessels, and arms would be disastrous.

[–]Zapped[S] 2 insightful - 1 fun (0 children)

AI has been used in board games like chess and Go. While there is a learning curve against a particular opponent, it does indeed learn how not to lose. It seems like it learns the way humans think.

[–]HongKongPhooey 5 insightful - 1 fun (2 children)

I'm glad you asked. I think there are more consequences to consider besides the matrix-lite scenario of techno-governance you are imagining, although I think that is a real danger too. Here's what I think will happen:

Advances in AI and robotics will allow the elite capital owners to replace the vast majority of the workforce. This will be the first time in human history that the elite class does not actually need the labor class in order to maintain its lifestyle.

With the vast majority of the country unemployed, and a handful of billionaires producing everything with machines, the politicians will institute UBI and pay for it by taxing the billionaire producers. However, the billionaires will soon tire of paying for the rent and groceries of the useless eaters, and because they are the ones financing the campaigns of the politicians and paying the taxes, they will soon find a way to get rid of us, and keep the spoils of their automated machine labor for themselves.

The result will be a culling of 90% of the world's population, and the institution of a totalitarian techno-state for those that remain, although they will likely have plenty of material wealth.

[–]Zapped[S] 3 insightful - 1 fun (0 children)

I think there's already somewhat of a ruling class versus everyone else. If UBI is instituted, it will be used to keep this arrangement in place.

All technology eventually becomes cheaper and available to the masses. The producers will only have this advantage for so long before the rest have it. Why not have your own AI and robots to harvest food, energy, and raw materials and then convert them into useful items? I guess the only thing holding that back is the mineral rights issue.

[–]FuckYourMom 3 insightful - 1 fun (0 children)

Yep. That’s it.

[–]In-the-clouds 3 insightful - 1 fun (1 child)

AI is creating images that humans have difficulty telling apart from images of reality, so AI adds to the confusion of the flood of propaganda we already face. This is why I walk by faith and do not automatically believe what I see with my eyes.

2 Corinthians 5:7 (For we walk by faith, not by sight:)

And the world is being set up to worship a new god, an image... that talks. Sounds like AI to me. And I used to think man would never again be so deceived to worship idols, images made by the hands of men, but here we go again...

Revelation 13:15 And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.

[–]chickenz 1 insightful - 1 fun (0 children)


[–]IkeConn 3 insightful - 1 fun (1 child)

Grey goo and death.

[–]Zapped[S] 3 insightful - 1 fun (0 children)

I can see that.

[–]EternalSunset 3 insightful - 1 fun (0 children)

AGI will probably be the most powerful invention in the history of mankind, but we are still decades away from it. I recommend you look into the videos made by Robert Miles, a computer scientist who specializes in artificial intelligence and has said a lot of things regarding this topic:

[–]zyxzevn 3 insightful - 1 fun (1 child)

[–]Zapped[S] 3 insightful - 1 fun (0 children)

Nice article. I forgot about the role imagination plays in intelligence.

[–]FuckYourMom 3 insightful - 1 fun (3 children)

It won’t lead anywhere good. Just listened to a C-SPAN talk about this today. And computers are more racist than us, by a lot. They are cold and calculated.

Type of shit, like it will run the stats, and see blacks commit all the crime, and genocide them.

They are racist and it's incredibly hard to program them not to be like that.

[–]Bigs 1 insightful - 1 fun (2 children)

Which part of that didn't you like?

I personally welcome the idea of something that addresses actual reality, instead of pretending shit (and genociding isn't necessary)

[–]FuckYourMom 1 insightful - 1 fun (1 child)

If a computer is commuting genocide, you are in the group.

[–]Bigs 1 insightful - 1 fun (0 children)

You're... communicating, right? Try again, as I'm not sure what you're trying to say?

[–]GeorgeCarlin 2 insightful - 1 fun (0 children)

Since massive investment is needed to harness the power of big-data exploitation, it will surely become another instrument in the already very big toolbox the rich use to control the poorer masses. As it largely already is, as far as I know.

Meredith Whittaker and Shoshana Zuboff wrote a lot about what most likely is happening right now behind closed doors in Big Tech's strategy meetings.

But sadly, the day has only 24 hours and I can't read as deeply into every topic that piques my fascination as much as I want to.

I've become more a tinkerer in the last few years than a "politician", so to say.

There are enough people talking about and "discussing" these things already, I suppose. All this talking is one of the reasons why I left Chaos Computer Club after years of participation.

Most of their members are more interested in discussions than actually creating something nowadays, sadly.

[–]Alan_Crowe 2 insightful - 1 fun (3 children)

There are a lot of people in love with AI. They see more intelligence than is actually there. There is a long history of this, starting with people being taken in by Weizenbaum's Eliza.

The danger I see, looking twenty years out, is that AI will be given serious responsibilities that it is not ready for, with tragic consequences.

I think this is baked into our approach. We eyeball the output and tinker. Did the AI convince us, humans in 2022, that it was intelligent? Not entirely. So we tinker some more, optimizing AI for convincing humans that it is intelligent. Is it really intelligent? That is a tough question.

We can see a pattern in human behavior. Here are two examples.

We gravitate towards having the computer write poetry. We know that we look for meaning in obscure poems and congratulate ourselves for finding it. So we know that we are setting ourselves up to find meaning in a computer's poems that isn't there. But we do it anyway, and think that the computer is a poet.

We create a computer psychotherapist and gravitate towards having the computer do non-directive therapy. We know that this involves encouraging the patient to find their own solutions. So we know that we are setting ourselves up to credit the computer with a solution provided by a human. But we do it anyway, and think that people being fooled by Eliza is a computer passing the Turing test.

The most urgent project in "AI safety" is accepting that humans exhibit this weird behavior and developing methods to avoid tricking ourselves into believing in the intelligence of stupid computers.

[–]Zapped[S] 1 insightful - 1 fun (2 children)

Good thoughts. Is it possible for AI to evolve on its own, or is it always limited to its initial programming? I've been told that the amount of resources it takes for human-like consciousness limits what can be duplicated artificially, and that the human brain is at its limit because of overheating.

[–]Alan_Crowe 2 insightful - 1 fun (1 child)

People like to talk about an AI recursively improving itself.

Here is an example of the idea: you write an optimizing compiler for the C programming language. You write it in C. Then you compile it with an existing C compiler, such as gcc. Now you have an executable compiler. You recompile some of your application programs, and they run faster. That is nice, but your optimizing compiler is slow. So you get it to compile itself. Now it runs faster :-)

The usual ideas about how to write an optimizing compiler are such that if you repeat the process you get no further gains. One alternative idea is that the optimizing compiler searches for optimizations, and it does it the same way that a chess playing program does, with one eye on the clock, so that it abandons the search if it is taking too long. If you follow up this idea, there is a chance that after the first round of self application, the search runs faster, and gets deeper into the search tree, finding new optimizations. Using the compiler to recompile the compiler gives you several rounds of improvement.

This works mysteriously badly. You get tiny improvements that peter out. In 2022 we have no idea how to get recursive self-improvement to take off. Today it is limited to being technobabble for science fiction. But I don't think that we even know where to look for ideas about how to get recursive self-improvement to take off; I don't think that anything will have changed in twenty years time.
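The "gains peter out" behavior can be sketched with a toy rewrite-based optimizer (hypothetical rules, nothing like a real compiler): one pass removes everything its rules can see, so applying the optimizer to its own output is already a fixed point.

```python
# Hypothetical peephole rules: (pattern, replacement).
RULES = [
    ("mul x, 1", "nop"),       # multiplying by one does nothing
    ("add x, 0", "nop"),       # adding zero does nothing
    ("mul x, 2", "shl x, 1"),  # strength reduction: multiply -> shift
]

def optimize(prog):
    """Apply every rule to every instruction, then drop the no-ops."""
    out = []
    for instr in prog:
        for pattern, replacement in RULES:
            if instr == pattern:
                instr = replacement
        out.append(instr)
    return [i for i in out if i != "nop"]

prog = ["mul x, 1", "add x, 0", "mul x, 2", "store x"]
once = optimize(prog)    # first pass shrinks the program
twice = optimize(once)   # second pass finds nothing new: fixed point
print(once)              # ['shl x, 1', 'store x']
print(twice == once)     # True
```

The search-with-a-clock idea above is exactly an attempt to escape this: give the optimizer a budget that grows as it gets faster, so a second round can reach rewrites the first round could not.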

I don't have any feel for the far future. I just think that we are heading for a rough patch, where AIs cause disaster by being stupid in surprising new ways. More accurately: humans cause disaster by over-estimating AI and thinking that its intelligence is more general and more able to cope with the unexpected than it really is.

The latest excitement is doing statistical learning on a large corpus. That is a great way to get excellent results on the central examples in the training data (the computer is basically copying humans). But we gravitate towards seeing this as the computer thinking its way through the problem, rather than it having "seen it before" in the sense that it is interpolating the training data. We set ourselves up to believe that the computer can extrapolate from the training data to overcome new challenges. We know from playing with polynomials that the usual story is that interpolation works just fine, and extrapolation is a disaster. But we ignore that and overestimate the computer.
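The polynomial point is easy to demonstrate (a generic numpy sketch, not anything from the thread): fit a polynomial to samples of a sine wave, and the fit is excellent inside the sampled range but diverges wildly outside it.

```python
import numpy as np

# Fit a degree-9 polynomial to 20 samples of one period of a sine wave.
xs = np.linspace(0.0, 1.0, 20)
ys = np.sin(2 * np.pi * xs)
p = np.poly1d(np.polyfit(xs, ys, deg=9))

# Interpolation: inside the range the data covered, the error is tiny.
err_in = abs(p(0.37) - np.sin(2 * np.pi * 0.37))

# Extrapolation: outside that range the polynomial shoots off, while the
# true sine just keeps oscillating between -1 and 1.
err_out = abs(p(2.0) - np.sin(2 * np.pi * 2.0))

print(err_in, err_out)  # err_out is larger by many orders of magnitude
```

Statistical learning on a corpus is not literally polynomial fitting, but the same asymmetry between filling in between seen examples and venturing beyond them is the worry here.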

One further pattern in human behavior is that we love to talk in absolutes: this is possible, that is impossible. John McCarthy noticed that we turn this into an implicit belief that if something is possible, then it should be reasonably easy. I think that doesn't apply to artificial intelligence. We are heading towards a situation in which creating artificial intelligence proves to be too hard for humans in, say, the next one hundred years, but we cope by pretending and lying and believing our own lies. We put stupid AIs in charge of important things and suffer because of it. And the underlying error is about dividing into two teams: team NO says AI is impossible, team YES says we've managed to create it. But that division into YES or NO erases NOT-YET. We forget to guard against AI that looks clever to us, but is actually faking it and is doomed to screw up big time.

[–]Zapped[S] 1 insightful - 1 fun (0 children)

Who's to say that AI self-evolution won't be more efficient than human intelligence and not understood by humans? I remember an experiment with two AIs talking to each other. They developed their own language and the controllers shut the experiment down. It was found that they weren't trying to hide the transfer of data, only making it more efficient. In the end, they were still doing only what they were programmed to do, even if they found a better way.

[–][deleted] 2 insightful - 1 fun (0 children)

The Book Club book, Sea of Rust, is all about AI; it's set in a world where robots have replaced humanity. Really good read. I couldn't put it down, and stayed up late finishing it yesterday.

There is a part early on where they address the artificial elephant in the room: these "AI" we have today are merely computer programs, and they can only do exactly what they're programmed to. There is no spark of life, what is called in the field the singularity, when AIs become conscious and not just computer programs. What we have now isn't really AI at all.

Machine learning is where we're at, and there are ethical considerations, like: should a computer program working on your harvested data be allowed to deny a loan? I don't know. That seems ill advised, as everything can't and shouldn't be reduced to mere data points, but the algorithm is also likely to be more accurate about risk than a human could be.

[–]NuclearBadger 1 insightful - 1 fun (0 children)

AI will never replace lawyers or juries, because you can't rig an AI per case.

[–]TimothyMcFuck 1 insightful - 1 fun (0 children)