So an AGI might be smart enough to build other smart machines that could destroy us, probably without any intention to do so. It would just be following its programming.
Well, that could happen if an AI were free to think and act however it wants.
How should we regulate this technology? Is it really that alarming, or are humans just building their own destructive machines?