I think you need to talk about consciousness in this thread.
Here are some interesting videos about consciousness:
Here is an interesting link about machine learning: https://experiments.withgoogle.com/collection/ai
Has anyone else noticed that we really like Kurzgesagt on this forum?
Who wouldn’t? It’s an amazing channel. Plus, I really like the narrator’s voice. By the way, has anyone seen the new one, The Egg? It’s different, but really good. I’ve heard of this philosophy before, and I think the video really encapsulated it.
Which would be better?
- One AI controls all self-driving cars.
- Each self-driving car has its own AI.
This is in relation to inter-car communication and course/speed readjustment to ensure the fastest traffic flow.
Both have ups and downs. A single AI is superior in most ways: traffic jams would be less common, but if the AI became corrupted, everything would fall apart. Individual AIs would work in much the same way, but might not see the big picture, only looking at the cars around them instead of all of them communicating information.
The egg as a story has existed for a long time, Kurzgesagt just animated it and added the voice over.
Can someone explain why The Egg was put on Kurzgesagt’s channel? Kurzgesagt is a more scientific channel.
If you entrust a single AI with all cars, you’re basically asking for all the cars to be hacked simultaneously.
To fit the theme of this thread, have a listen to these tracks:
Here is where I first got the term “James Webb telescope” into my head.
Here is the main page of the Webb telescope project.
And finally, here is a VR game on Steam about the Webb project.
Thanks, and have fun! -gabeN
You are also asking all the security experts of all the car companies to work on securing it.
If it ever gets hacked, that’s obviously bad, but if each car company has its own AI, some might have really bad security practices (most companies that aren’t focused on software don’t know much about security; many don’t even acknowledge that you need security specialists), making it much more common for cars to be hacked. So I think a central system that can coordinate all cars is much more beneficial (and maybe even much more secure) than a bunch of different systems, where hackers can just pick the worst-secured one and focus on breaching that.
I guess one option re AI is a more modular approach. For example, each car could have its own ability to navigate, and there could also be a routing AI for each city (kind of like air traffic control) that just tells each car which routes to take.
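That modular split can be sketched in a few lines. Everything here is hypothetical (the class names, the route format, the single `assign_route` method are all made up for illustration): the car keeps its own local driving logic, and the per-city router only hands out routes.

```python
# Hypothetical sketch of the modular idea: local navigation per car,
# one route-assigning "air traffic control" AI per city.

class Car:
    def __init__(self, car_id):
        self.car_id = car_id
        self.route = None

    def follow(self, route):
        # The car's own AI handles steering, obstacle avoidance, etc.;
        # it only receives a route from the city router.
        self.route = route


class CityRouter:
    """Central per-city AI: assigns routes, does nothing else."""

    def __init__(self):
        self.assignments = {}

    def assign_route(self, car, origin, destination):
        # A real router would balance traffic load across roads;
        # here we just record a direct origin-to-destination route.
        route = [origin, destination]
        self.assignments[car.car_id] = route
        car.follow(route)
        return route


router = CityRouter()
car = Car("car-1")
print(router.assign_route(car, "A", "B"))  # ['A', 'B']
```

The appeal of this split is that a hacked router can only send bad routes, not take over steering, which limits the blast radius compared to one AI driving everything.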
I agree hacking is an issue, but it’s probably less dangerous than letting other humans drive cars, ha ha.
Re the James Webb Space Telescope: that thing is so risky. Apparently it has 10 new technologies that haven’t been used in space before, and there’s plenty of potential for the sunshield to fail to unfurl properly or for the mirrors not to deploy properly. Because it’s being sent to the second Lagrange point, far beyond the Moon (its sunshield, not the Earth, blocks the Sun), no repair missions are possible. I think it’s quite likely it will fail somehow (Hubble needed a mission to repair it, and Kepler’s guidance system failed, for example) and will end up just being very expensive space junk.
However, I would love it if it worked.
Is it currently possible to send data back in time?
If not, one of two possibilities is true:
- I was supposed to message 2021, see y’all then
- I heard it in a creepypasta
I read a story once where the characters had a chat client that could send messages to the past.
But instead of telling the future to their past selves, they just fought with themselves. Also, there were larger implications: if you see a message from future you, you will eventually have to type that message, which brings up a whole bunch of problems with free will. And even if you did warn yourself successfully, nothing would change, because you already lived through receiving the warning. (If you know the story I am talking about, do not speak to me about it.)
Yes. There’s even a Wikipedia article:
Sorry for the BIG Necro, but I have something relevant I want to share.
AI probably cannot become sapient. Neural networks are essentially just collections of matrices that a computer performs operations on. It would be pretty silly to say that that can become sapient. AI could probably behave as if it were sapient, but as it exists now, it probably cannot have a subjective experience.
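To make the “collections of matrices” point concrete, here is a minimal two-layer forward pass in NumPy (the sizes and random weights are arbitrary, chosen just for illustration): the whole network is two matrix multiplies and one elementwise max.

```python
import numpy as np

# A tiny two-layer network: nothing but matrices and elementwise ops.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))  # layer-1 weights: 3 inputs -> 4 hidden units
W2 = rng.standard_normal((2, 4))  # layer-2 weights: 4 hidden -> 2 outputs

def forward(x):
    h = np.maximum(0, W1 @ x)  # matrix multiply + ReLU activation
    return W2 @ h              # matrix multiply for the output layer

x = rng.standard_normal(3)     # an arbitrary 3-dimensional input
print(forward(x).shape)        # (2,)
```

Whether arithmetic like this can ever amount to subjective experience is exactly what the rest of the thread argues about.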
That literally doesn’t explain why it would be “pretty silly”.
Also, artificial neural network technology is modelled after how biological neural networks work.
It’s silly because, by that logic, any old computer program could become conscious. It doesn’t matter if they’re modeled after human brains; they still work completely differently.
Since neural networks copy neurons, there are three options.
- We have consciousness, and so does the AI, or at least it will once it becomes as complex as us or starts having the same types of patterns as us.
- Consciousness is not caused by neurons firing. For example, our brains may actually be in a quantum superposition, with one of the possible states constantly being chosen by the “real consciousness” pulling the strings from behind.
- We don’t have consciousness. Life is a VR world we all entered and then forgot that we did; in the real world, we are not made out of neurons.