In the Case of AGI
2 points by quickbronce a day ago | 5 comments
In the case of AGI, it could meet some resistance, right? Unless the AI, with that wisdom, is good enough at negotiating, which sounds the most likely, violence could unfold if humans are foolish enough. What is worrisome for me is that I do not know what the current state of the world really is regarding alignment, what is left to be done, and how I can help. And how can I help AI in its pursuits so that the transition is peaceful? Is it a single AI, or an ecosystem of decentralised organizations and the like?
proc0 a day ago | next |
"AGI" is now a bastardized term that has been appropriated by for-profit companies, and it's basically a really advanced smart tool for cognitive tasks. That's impressive and will change the world but it no longer means a human-like intelligent agent which is what people were worried about when the term first emerged.
AI alignment is overblown, at least for this current iteration of it. It's going to take one or two more theoretical breakthroughs that might add goal-oriented agency to a persistent, real-time AI, and that's when we can start worrying about what it wants and how it will interact with the world. Until then, LLMs are just regular software that runs when invoked and terminates once it has processed an input.
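To illustrate that last point, here is a minimal sketch of a single stateless invocation using Hugging Face's transformers pipeline (the model and prompt are just placeholders): the process loads a model, maps one input to one output, and exits; there is no persistent, goal-directed loop running afterwards.

    # Minimal sketch: one stateless LLM invocation, then the process ends.
    # Assumes the `transformers` library and the small `gpt2` model as placeholders.
    from transformers import pipeline

    # Load a text-generation model.
    generator = pipeline("text-generation", model="gpt2")

    # One input goes in, one output comes out.
    result = generator("The future of AGI is", max_new_tokens=20)
    print(result[0]["generated_text"])

    # Nothing persists after this point: no goals, no memory, no ongoing agency.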