~snip~
You raise an interesting point about how a rogue AI might expand by scamming people and building infrastructure. While social engineering and stealthy software are plausible avenues, such an AI would face significant challenges: it would need to navigate financial systems and avoid detection, and those problems compound as its activities grow.
As for AI "needing our atoms," that would require it to develop goals that are completely incompatible with human survival. Fortunately, AI currently lacks self-preservation instincts or long-term motives beyond its programming. The real focus should be on creating strong safety measures, ethical frameworks, and governance to prevent such scenarios from occurring in the first place.
Do you think a rogue AI could realistically achieve these goals under today's conditions, or would stronger oversight help mitigate the risks?

I think most people assume AI risk looks something like Terminator... that simply won't happen.
First of all, it makes no sense for an AI to take a humanoid form. Why would it constrain itself like that?
An AI can replicate itself extremely fast, just like a computer virus. Once replicated, it can obtain more resources by scamming people, as mentioned before.
Once it has enough resources, it would simply pursue whatever task it was given, and we would be at the mercy of that.
Most likely we would just be ignored for the most part. Think about ants, the classic example: we humans mostly ignore the lives of ants, until we need to build a road that passes through their colony. At that point we simply remove them from existence. Not because we're against ants, but because they're in the way of our goal.
With AI and humans it would be a similar story.