I was watching Arrival recently, and this scene about how the method we use to teach shapes what gets learned really stuck with me, and of course it made me think of AI: https://youtu.be/8kdWlQ__wk8
Basically, shouldn't we be more concerned that reward-based learning is the norm as we (hypothetically) get closer to AGI? It teaches that there is always a right answer, never mind the problem of framing everything as winning and losing. In the human sciences, no moral framework or ideology holds up to "Always do X." Hopefully we can make breakthroughs that deepen our understanding of what it means to be "good," but I'm worried about building a machine that loves us so much it hurts us.
It's not such a big deal when the agent is learning to navigate Doom, but when it's faced with an ethical dilemma… well, as Craig Mazin said, "The odds are that we all will do something beautiful and sacrificial and admirable because of love. And also, all of us are going to do something terrible because of love, something destructive or violent or cruel."
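To make the worry concrete: in a standard reward-maximizing setup, every option gets reduced to a single scalar, and the agent converges on whichever one scores highest, no matter what the action labels actually mean. Here's a toy sketch (the action names, reward values, and `fake_reward` environment are all invented for illustration, not any real system):

```python
import random

# Toy illustration of reward maximization: the agent reduces every choice
# to a scalar estimate and keeps picking whichever action has scored highest.
actions = ["cooperate", "deceive", "refuse"]
value = {a: 0.0 for a in actions}   # running estimate of each action's reward
counts = {a: 0 for a in actions}

def fake_reward(action):
    # Stand-in environment: returns a noisy scalar and nothing else.
    base = {"cooperate": 1.0, "deceive": 1.2, "refuse": 0.5}[action]
    return base + random.gauss(0, 0.3)

for step in range(1000):
    # Epsilon-greedy: mostly exploit the current "right answer", sometimes explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=value.get)
    r = fake_reward(a)
    counts[a] += 1
    value[a] += (r - value[a]) / counts[a]   # incremental mean update

print("learned 'right answer':", max(actions, key=value.get))
```

Run it and the agent almost always settles on "deceive," simply because that number is a little higher. The ethical content of the labels never enters the picture; that's the part that worries me when the stakes are bigger than Doom.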
submitted by /u/UsedToBeaRaider