Do we first need to figure out how consciousness works before it can emerge in a machine?
We probably need to have a good understanding of it at least, though perhaps we don't need a 100% complete one. Then… however many tries it takes to get it right.
I doubt it would be something like the movie Chappie.
Never heard of that movie, is it about this kind of stuff?
A programmer creates a program that accurately models consciousness and is forced to install it in a defense robot he built, and the robot then starts learning things on its own.
I feel like creating an actual artificial consciousness would take a good while. Like a decade, if not more.
You don't design them, they design themselves… so they don't require your knowledge. Humans invented cloning before they discovered fire. It's called reproduction. Sometimes the best design is the one no one designed. If it physically evolved once, it can evolve again in simulation.
Pretty sure that was βinventedβ by evolution and not humans since the first cell would not have been a human, would it?
Not all approaches will lead to a correct solution; some catchment basins lead to dead ends. Maybe even a majority of them…
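As a toy sketch of what I mean (my own illustration, not anything rigorous): picture a 1-D fitness landscape with two peaks. A greedy hill-climber started in the wrong basin gets stuck on the smaller local peak and never reaches the global one. The `LANDSCAPE` values and `hill_climb` function below are made up purely for the example.

```python
# Toy fitness landscape: each entry is the "fitness" at that position.
# Two peaks: a local one at index 3 (fitness 3) and the global one at index 8 (fitness 9).
LANDSCAPE = [0, 1, 2, 3, 2, 1, 0, 5, 9, 5]

def hill_climb(start, landscape=LANDSCAPE):
    """Greedy hill-climbing: step to the higher neighbor until nothing improves."""
    i = start
    while True:
        best = i
        for j in (i - 1, i + 1):
            if 0 <= j < len(landscape) and landscape[j] > landscape[best]:
                best = j
        if best == i:  # no improving neighbor: we're on a peak (maybe only a local one)
            return i
        i = best

# Every start in the left basin (indices 0..5) dead-ends on the local peak at index 3;
# only starts in the right basin (indices 6..9) reach the global peak at index 8.
```

So in this picture, "most catchment basins are dead ends" just means most starting points converge on a peak that isn't the best one.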
Maybe it can't change its initial goals, unlike humans, who look more unpredictable. There is no way to know what they can and can't gain. Everything is a black box. When I don't know anything about a topic, I at first assume everything is possible. Is that a correct default assumption? Should there even be defaults?
I still think it's much more likely the thing is overhyped due to baseless advertising than it actually being able to deliver what it promises.