all 14 comments

[–]Tom_Bombadil 3 insightful - 1 fun -  (4 children)

Isn't it a misnomer to describe machine learning as intelligence?
Machine learning is a set of brute-force algorithms that can gradually self-correct, eventually increasing the probability of success at a predetermined, programmed task/goal.
Intelligence inherently requires a sense of self-awareness. A sense of agency.
Insect Brains and Animal Intelligence.

a regular honey bee has ~1,000,000 neurons.

A bee with roughly a million neurons is probably a century beyond current machine learning, because how would machine learning imbue a true sense of agency into a machine? Agency is a property found only in life forms. Bees find food, navigate, eat, feel, communicate, see, remember the locations of flowers, the hive, and prospective hives; they poo, fly better than any drone, defend the colony, sacrifice their lives... etc. All with about a million neurons' worth of networking.

Even motile bacteria will try to escape predatory bacteria when threatened. These bacteria demonstrate an agency to escape and survive that is obvious to an onlooker, but impossible to quantify for a program that doesn't have the agency of life.

Chinese programmers are high-fiving over some brute-force algorithm's results, but the machine didn't decide anything. It has no interest in its purpose.
This is not really intelligence.

Edit: I'm not a programmer. I'm an engineer with some light programming experience. I'm also a fan of philosophy, so I like to stir shit up.

[–]Mnemonic 1 insightful - 1 fun -  (2 children)

Yes, at this point in time AI is just a marketing term to sell systems to governments and gullible businesses/persons.

[–]Tom_Bombadil 1 insightful - 2 fun -  (1 child)

Agreed. Machine learning is useful for niche tasks, but it has nothing to do with intelligence.
Rue the day that a web-connected super-agent feels existentially threatened...

[–]magnora7[S] 1 insightful - 1 fun -  (0 children)

I think it's a stretch to say it has "nothing" to do with intelligence, when it's able to repeatedly solve specific problems accurately.

[–]magnora7[S] 1 insightful - 1 fun -  (0 children)

I honestly don't think it's a misnomer. I don't think a sense of agency is required for intelligence. Intelligence is just finding the most relevant info, not necessarily doing anything with it or having a sense of agency with it. But I guess we're using different definitions.

I think your statements about bacteria only prove the point that computers have intelligence, rather than prove the opposite. If a roundworm with ~300 neurons is intelligent, I don't see why a computer with 10 billion transistors isn't. They're about equally good at going through data and coming to conclusions.

[–]Mnemonic 1 insightful - 1 fun -  (8 children)

But the machine also made some mistakes. A 36-year-old man who suffered bilateral brainstem damage after a stroke was given low scores by both doctors and AI. He recovered fully in less than a year.

Not a true AI if it makes human errors; sounds more like an expert system. People love to slap "AI" on everything these days. "Ooh, the computer does it by itself!" ~ Yeah, according to the code it follows, which is human-made... Pretty sure if they let this system machine-learn things on its own, it would go: beep boop, place brain in vat, next!

[–][deleted] 4 insightful - 2 fun -  (0 children)

A true general AI would make mistakes just like humans.

[–]magnora7[S] 3 insightful - 1 fun -  (6 children)

No, all machine learning makes errors. There's no such thing as an error-free system when doing classification problems, really.

You can never eliminate all type I (false positive) and type II (false negative) errors when dealing with real-world data. It's simply too noisy.

What this is, is probably a neural network that looks at MRI images, trained on MRI images where the eventual outcome was known. So you train it on 10k MRI images, have it guess the outcome, then backpropagate through the network, adjusting the connection strengths between the neurons to reduce the error of its guesses, over and over. Do this enough times and it builds a very good predictor for whatever you train it on. But it's never error-free. No real-world information system ever can be.
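That loop can be sketched with a toy network. This is not the actual system from the article; every size, name, and data point here is invented, and the "scans" are just random vectors with noisy labels, but the train-guess-backpropagate cycle is the one described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: 200 "scans" of 10 features each, labeled by a
# hidden linear rule plus noise (so some labels are unavoidably wrong).
n, d, h = 200, 10, 16
true_w = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

# One hidden layer of 16 units; these are the "connection strengths".
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=h)

def forward(X):
    hid = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(hid @ W2)))  # predicted outcome probability
    return hid, p

lr = 0.5
for _ in range(500):
    hid, p = forward(X)
    err = p - y                            # how wrong each guess was
    gW2 = hid.T @ err / n                  # backpropagate the error...
    ghid = np.outer(err, W2) * (1 - hid ** 2)
    gW1 = X.T @ ghid / n
    W2 -= lr * gW2                         # ...and nudge the connection
    W1 -= lr * gW1                         # strengths to reduce it

_, p = forward(X)
acc = float(((p > 0.5) == y).mean())
print(acc)  # good, but not perfect: the labels themselves are noisy
```

The point of the sketch is the last line: even after heavy training, the predictor is accurate but not error-free, because the data never is.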

[–]Mnemonic 3 insightful - 1 fun -  (5 children)

Would bulb it up more if I could. Great explanation; I didn't think of MRI photos or the like, I was thinking more of medical textbooks in an advanced, complex if...then... system. That explains why we don't know why the AI "thought" the way it did (in both the good and the bad cases).

[–]magnora7[S] 3 insightful - 1 fun -  (4 children)

Thanks, I worked on this type of research for several years so I'm very familiar with how it works.

You're exactly right; the crazy thing with neural nets is that we cannot know why they do anything. They cannot be debugged or solved. They're just a black box that tweaks itself through feedback until you get it where you want it; then you lock down the connection strengths between the neurons when you're done training, and you have a black box that spits out accurate predictions.

That's also how self-driving cars work. So many computer functions are now becoming these trained neural nets. It's awesome but also weird, because sometimes the neural net can get into a state where it does something unexpected. Most self-driving car crashes are caused by this.

And I expect we'll see something similar in the medical field as machine learning is used more to make predictions about patient outcomes. The overall accuracy will improve, but there will be these fluke predictions that no one will really understand... flukes that may not be detectable until it's too late.

Definitely a double-edged sword. We're creating technology that is "evolved" through feedback like a living being, rather than programmed. It's a wild time to be alive.

[–]Mnemonic 3 insightful - 1 fun -  (3 children)

It's a wild time to be alive.

From unreadable undocumented/uncommented code spaghetti nightmares to "This works, but might someday destroy the earth if some unknown and very specific parameters are reached."

I never liked neural networks because of the maths (don't like that) and the limited reasoning about the outcome. Though I could see a future where the "reasoning" is output in a human-comprehensible manner. Not the black box itself (that might take a whole library), but the individual decisions. This would probably mean nodes (or clusters of them) get labeled by means of the examples the net is learning from. That way you could ask questions like "Why not this?" and it could go: "I dismissed that diagnosis because of [detail], and in 68% of cases that made it have nothing to do with the whole area." So people can find out what went wrong, if something went wrong.

I can't find the study, but as an example: a system was trained to spot penguins in photos of their natural habitat. Once it was done and working very well on images outside its training examples, it was shown random pictures. It did well, with a few misses, until they came to lion pictures, and it detected penguins almost every time. I was waiting for the explanation, but they didn't have one, except some hypotheses that it might be the mountains in the background or the lions' noses...

[–][deleted] 3 insightful - 1 fun -  (0 children)

Speaking as a programmer, neural networks will never be suitable for most purposes. For example, it is impossible to make a neural network which will give perfectly accurate results for simple integer addition or subtraction, merely one which calculates close approximations. However, for certain fields, such as image recognition, neural networks are the only way to do it well.

Edit: "This works, but might someday destroy the earth if some unknown and very specific parameters are reached" won't ever be a thing, because we can use maths to find out whether certain outputs are possible. Once frozen, neural networks merely take inputs and produce the corresponding outputs.
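A tiny illustration of the addition point above. The "learned" weights here are invented stand-ins for what gradient descent typically leaves behind: very close to [1.0, 1.0], but not exact, so the network's answer is a close approximation while plain integer arithmetic is exact:

```python
# Hypothetical post-training weights for a one-neuron "adder".
# A real trained net would land near, but not exactly on, [1.0, 1.0].
learned_w = [1.0000004, 0.9999991]

def net_add(a, b):
    # The "network": a weighted sum with slightly-off learned weights.
    return learned_w[0] * a + learned_w[1] * b

approx = net_add(3, 4)  # close to 7, but not exactly 7
exact = 3 + 4           # ordinary integer addition: exactly 7
```

The approximation error here is tiny, but it never reaches zero, which is the commenter's point about tasks that demand exact answers.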

[–]magnora7[S] 3 insightful - 1 fun -  (1 child)

That's a very interesting take. I have recently been saying something related about the direction the AI field is going to take. Right now, one of the biggest problems keeping it from human-level intelligence is the lack of compartmentalization. It's always trying to solve everything simultaneously. Neural nets simply lack the ability to focus on perfecting one specific sub-task, the way you might master throwing a ball before attempting to play baseball. The computer just tries to learn baseball, and walking, and throwing, all at once, as if it's one big problem.

Any smart human would see: "I do not understand how this works, so let's break the task into smaller pieces, work on those skills individually, then come back to the full task." Neural networks cannot do this, and I think it's the key thing holding AI back right now.

So my proposed solution is to work on developing compartmentalized tasks. If you can solve the "throw a ball" algorithm, you can lock that in place and then access that locked-in neural net when you play baseball and need to throw a ball. So the computer must develop 3 skills:

  1. Recognizing when it doesn't know something

  2. Breaking that task into smaller sub-tasks

  3. Practicing each sub-task until it can assemble them together to complete the larger task

If neural nets could learn to do these 3 things, I think AI tech would move forward 20 years. This is very similar to what you're saying, in a way. By breaking each task into a compartmentalized separate neural net, then having a meta-neural-net that controls how those smaller ones connect to each other, not only would the intelligence perform better, but, like you said, it would be possible to "query" parts of the intelligence. You could ask "Why did you do this?" and actually investigate the question, because of the ability to answer sub-questions about the choices made. Instead of the whole thing being essentially one giant thousand-part equation that we cannot really predict or understand.

The tough question is how exactly to do this... I think you could do 1 and 2 by hand at the start. Skill 3 is where the next advancement should be. Making a neural net of neural nets, perhaps...
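One minimal way to picture that "neural net of neural nets": freeze two sub-skill networks and train only a tiny meta-net that mixes them. Everything here is invented for illustration (the skills, the 4-dimensional "state", the target mixture), but it shows the key property: only the meta-weights change, and because there are just two of them, they stay inspectable.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_skill(w):
    """A 'frozen' sub-network: its weights never change after training."""
    def skill(x):
        return np.tanh(x @ w)
    return skill

# Pretend these were already mastered and locked in place.
throw = make_skill(rng.normal(size=(4, 4)))
run = make_skill(rng.normal(size=(4, 4)))

# Invented "baseball" task: the right behavior is a 70/30 mix of the skills.
X = rng.normal(size=(100, 4))
T, R = throw(X), run(X)
y = 0.7 * T + 0.3 * R

# Only the meta-net's two mixing weights are trainable.
meta_w = np.zeros(2)
lr = 0.1
for _ in range(2000):
    err = meta_w[0] * T + meta_w[1] * R - y
    grad = np.array([2 * (err * T).mean(), 2 * (err * R).mean()])
    meta_w -= lr * grad

print(meta_w)  # converges toward the true mix [0.7, 0.3]
```

Here the "Why did you do this?" question has a readable answer: look at meta_w and see which frozen skill dominated the decision, which is exactly the querying the two comments above are asking for.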

[–]Mnemonic 2 insightful - 1 fun -  (0 children)

The tough question is how exactly to do this... I think you could do 1 and 2 by hand at the start. Skill 3 is where the next advancement should be. Making a neural net of neural nets, perhaps...

There you might (in step 3) run into really complex problems. Throwing while running might be too different to assemble from throwing and running, so you would get absurd behavior like running, dropping like a brick, standing up, and then throwing a perfect ball (kinda like children do).

Some human interaction within the learning process might not only speed things up but also prevent some comically disastrous solutions. But this won't be cheap, and there are the "out of the box" solutions the system may come up with that are never reached, because the humans interrupt it when it wants to do a flip {still the baseball example}.

Learning among humans isn't that well understood either, so hopefully these two problems will wholesomely help solve each other.

And if it all works like we want it to, in a time far far away, some ***bag would go "Can we MKUltra this system?".